00:00:00.000 Started by upstream project "autotest-per-patch" build number 132526
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.033 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:01.356 The recommended git tool is: git
00:00:01.356 using credential 00000000-0000-0000-0000-000000000002
00:00:01.358 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:01.371 Fetching changes from the remote Git repository
00:00:01.372 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:01.387 Using shallow fetch with depth 1
00:00:01.387 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:01.387 > git --version # timeout=10
00:00:01.400 > git --version # 'git version 2.39.2'
00:00:01.400 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:01.414 Setting http proxy: proxy-dmz.intel.com:911
00:00:01.414 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.230 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.244 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.261 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.261 > git config core.sparsecheckout # timeout=10
00:00:06.276 > git read-tree -mu HEAD # timeout=10
00:00:06.296 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.325 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.325 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.416 [Pipeline] Start of Pipeline
00:00:06.430 [Pipeline] library
00:00:06.432 Loading library shm_lib@master
00:00:06.432 Library shm_lib@master is cached. Copying from home.
00:00:06.450 [Pipeline] node
00:00:21.452 Still waiting to schedule task
00:00:21.452 Waiting for next available executor on ‘vagrant-vm-host’
00:16:08.440 Running on VM-host-WFP1 in /var/jenkins/workspace/raid-vg-autotest_2
00:16:08.442 [Pipeline] {
00:16:08.455 [Pipeline] catchError
00:16:08.457 [Pipeline] {
00:16:08.472 [Pipeline] wrap
00:16:08.483 [Pipeline] {
00:16:08.494 [Pipeline] stage
00:16:08.496 [Pipeline] { (Prologue)
00:16:08.518 [Pipeline] echo
00:16:08.519 Node: VM-host-WFP1
00:16:08.527 [Pipeline] cleanWs
00:16:08.570 [WS-CLEANUP] Deleting project workspace...
00:16:08.570 [WS-CLEANUP] Deferred wipeout is used...
00:16:08.579 [WS-CLEANUP] done
00:16:08.831 [Pipeline] setCustomBuildProperty
00:16:08.927 [Pipeline] httpRequest
00:16:09.332 [Pipeline] echo
00:16:09.335 Sorcerer 10.211.164.101 is alive
00:16:09.346 [Pipeline] retry
00:16:09.348 [Pipeline] {
00:16:09.363 [Pipeline] httpRequest
00:16:09.368 HttpMethod: GET
00:16:09.369 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:16:09.369 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:16:09.370 Response Code: HTTP/1.1 200 OK
00:16:09.371 Success: Status code 200 is in the accepted range: 200,404
00:16:09.371 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:16:09.517 [Pipeline] }
00:16:09.536 [Pipeline] // retry
00:16:09.546 [Pipeline] sh
00:16:09.830 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:16:09.854 [Pipeline] httpRequest
00:16:10.256 [Pipeline] echo
00:16:10.258 Sorcerer 10.211.164.101 is alive
00:16:10.269 [Pipeline] retry
00:16:10.272 [Pipeline] {
00:16:10.288 [Pipeline] httpRequest
00:16:10.292 HttpMethod: GET
00:16:10.293 URL: http://10.211.164.101/packages/spdk_ff173863b114ffb0d2b86e2825badcc504fc5fa1.tar.gz
00:16:10.293 Sending request to url: http://10.211.164.101/packages/spdk_ff173863b114ffb0d2b86e2825badcc504fc5fa1.tar.gz
00:16:10.294 Response Code: HTTP/1.1 200 OK
00:16:10.295 Success: Status code 200 is in the accepted range: 200,404
00:16:10.296 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_ff173863b114ffb0d2b86e2825badcc504fc5fa1.tar.gz
00:16:12.573 [Pipeline] }
00:16:12.610 [Pipeline] // retry
00:16:12.634 [Pipeline] sh
00:16:12.916 + tar --no-same-owner -xf spdk_ff173863b114ffb0d2b86e2825badcc504fc5fa1.tar.gz
00:16:15.519 [Pipeline] sh
00:16:15.814 + git -C spdk log --oneline -n5
00:16:15.814 ff173863b ut/bdev: Remove duplication with many stups among unit test files
00:16:15.814 658cb4c04 accel: Fix a bug that append_dif_generate_copy() did not set dif_ctx
00:16:15.814 fc308e3c5 accel: Fix comments for spdk_accel_*_dif_verify_copy()
00:16:15.814 e43b3b914 bdev: Clean up duplicated asserts in bdev_io_pull_data()
00:16:15.814 752c08b51 bdev: Rename _bdev_memory_domain_io_get_buf() to bdev_io_get_bounce_buf()
00:16:15.836 [Pipeline] writeFile
00:16:15.852 [Pipeline] sh
00:16:16.137 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:16:16.149 [Pipeline] sh
00:16:16.432 + cat autorun-spdk.conf
00:16:16.432 SPDK_RUN_FUNCTIONAL_TEST=1
00:16:16.432 SPDK_RUN_ASAN=1
00:16:16.432 SPDK_RUN_UBSAN=1
00:16:16.432 SPDK_TEST_RAID=1
00:16:16.432 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:16:16.440 RUN_NIGHTLY=0
00:16:16.442 [Pipeline] }
00:16:16.456 [Pipeline] // stage
00:16:16.472 [Pipeline] stage
00:16:16.475 [Pipeline] { (Run VM)
00:16:16.488 [Pipeline] sh
00:16:16.773 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:16:16.773 + echo 'Start stage prepare_nvme.sh'
00:16:16.773 Start stage prepare_nvme.sh
00:16:16.773 + [[ -n 0 ]]
00:16:16.773 + disk_prefix=ex0
00:16:16.773 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]]
00:16:16.773 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]]
00:16:16.773 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf
00:16:16.773 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:16:16.773 ++ SPDK_RUN_ASAN=1
00:16:16.773 ++ SPDK_RUN_UBSAN=1
00:16:16.773 ++ SPDK_TEST_RAID=1
00:16:16.773 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:16:16.773 ++ RUN_NIGHTLY=0
00:16:16.773 + cd /var/jenkins/workspace/raid-vg-autotest_2
00:16:16.773 + nvme_files=()
00:16:16.773 + declare -A nvme_files
00:16:16.773 + backend_dir=/var/lib/libvirt/images/backends
00:16:16.773 + nvme_files['nvme.img']=5G
00:16:16.773 + nvme_files['nvme-cmb.img']=5G
00:16:16.773 + nvme_files['nvme-multi0.img']=4G
00:16:16.773 + nvme_files['nvme-multi1.img']=4G
00:16:16.773 + nvme_files['nvme-multi2.img']=4G
00:16:16.773 + nvme_files['nvme-openstack.img']=8G
00:16:16.773 + nvme_files['nvme-zns.img']=5G
00:16:16.773 + (( SPDK_TEST_NVME_PMR == 1 ))
00:16:16.773 + (( SPDK_TEST_FTL == 1 ))
00:16:16.773 + (( SPDK_TEST_NVME_FDP == 1 ))
00:16:16.773 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:16:16.773 + for nvme in "${!nvme_files[@]}"
00:16:16.773 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:16:16.773 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:16:16.773 + for nvme in "${!nvme_files[@]}"
00:16:16.773 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:16:16.773 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:16:16.773 + for nvme in "${!nvme_files[@]}"
00:16:16.773 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:16:16.773 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:16:16.773 + for nvme in "${!nvme_files[@]}"
00:16:16.773 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:16:17.343 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:16:17.343 + for nvme in "${!nvme_files[@]}"
00:16:17.343 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:16:17.343 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:16:17.343 + for nvme in "${!nvme_files[@]}"
00:16:17.343 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:16:17.602 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:16:17.602 + for nvme in "${!nvme_files[@]}"
00:16:17.602 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:16:18.170 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:16:18.170 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:16:18.170 + echo 'End stage prepare_nvme.sh'
00:16:18.170 End stage prepare_nvme.sh
00:16:18.181 [Pipeline] sh
00:16:18.464 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:16:18.464 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39
00:16:18.464
00:16:18.464 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant
00:16:18.464 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk
00:16:18.464 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2
00:16:18.464 HELP=0
00:16:18.464 DRY_RUN=0
00:16:18.464 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,
00:16:18.464 NVME_DISKS_TYPE=nvme,nvme,
00:16:18.464 NVME_AUTO_CREATE=0
00:16:18.464 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,
00:16:18.464 NVME_CMB=,,
00:16:18.464 NVME_PMR=,,
00:16:18.464 NVME_ZNS=,,
00:16:18.464 NVME_MS=,,
00:16:18.464 NVME_FDP=,,
00:16:18.464 SPDK_VAGRANT_DISTRO=fedora39
00:16:18.464 SPDK_VAGRANT_VMCPU=10
00:16:18.464 SPDK_VAGRANT_VMRAM=12288
00:16:18.464 SPDK_VAGRANT_PROVIDER=libvirt
00:16:18.464 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:16:18.464 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:16:18.464 SPDK_OPENSTACK_NETWORK=0
00:16:18.464 VAGRANT_PACKAGE_BOX=0
00:16:18.464 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:16:18.464 FORCE_DISTRO=true
00:16:18.464 VAGRANT_BOX_VERSION=
00:16:18.464 EXTRA_VAGRANTFILES=
00:16:18.464 NIC_MODEL=e1000
00:16:18.464
00:16:18.464 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt'
00:16:18.464 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2
00:16:20.999 Bringing machine 'default' up with 'libvirt' provider...
00:16:22.379 ==> default: Creating image (snapshot of base box volume).
00:16:22.379 ==> default: Creating domain with the following settings...
00:16:22.379 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732641171_63d22bec98eef5b09c5c
00:16:22.379 ==> default: -- Domain type: kvm
00:16:22.379 ==> default: -- Cpus: 10
00:16:22.379 ==> default: -- Feature: acpi
00:16:22.379 ==> default: -- Feature: apic
00:16:22.379 ==> default: -- Feature: pae
00:16:22.379 ==> default: -- Memory: 12288M
00:16:22.379 ==> default: -- Memory Backing: hugepages:
00:16:22.379 ==> default: -- Management MAC:
00:16:22.379 ==> default: -- Loader:
00:16:22.379 ==> default: -- Nvram:
00:16:22.379 ==> default: -- Base box: spdk/fedora39
00:16:22.379 ==> default: -- Storage pool: default
00:16:22.379 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732641171_63d22bec98eef5b09c5c.img (20G)
00:16:22.379 ==> default: -- Volume Cache: default
00:16:22.379 ==> default: -- Kernel:
00:16:22.379 ==> default: -- Initrd:
00:16:22.379 ==> default: -- Graphics Type: vnc
00:16:22.379 ==> default: -- Graphics Port: -1
00:16:22.379 ==> default: -- Graphics IP: 127.0.0.1
00:16:22.379 ==> default: -- Graphics Password: Not defined
00:16:22.379 ==> default: -- Video Type: cirrus
00:16:22.379 ==> default: -- Video VRAM: 9216
00:16:22.379 ==> default: -- Sound Type:
00:16:22.379 ==> default: -- Keymap: en-us
00:16:22.379 ==> default: -- TPM Path:
00:16:22.379 ==> default: -- INPUT: type=mouse, bus=ps2
00:16:22.379 ==> default: -- Command line args:
00:16:22.379 ==> default: -> value=-device,
00:16:22.379 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:16:22.379 ==> default: -> value=-drive,
00:16:22.379 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,
00:16:22.379 ==> default: -> value=-device,
00:16:22.379 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:16:22.379 ==> default: -> value=-device,
00:16:22.379 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:16:22.379 ==> default: -> value=-drive,
00:16:22.379 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:16:22.379 ==> default: -> value=-device,
00:16:22.379 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:16:22.379 ==> default: -> value=-drive,
00:16:22.379 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:16:22.379 ==> default: -> value=-device,
00:16:22.379 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:16:22.379 ==> default: -> value=-drive,
00:16:22.379 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:16:22.379 ==> default: -> value=-device,
00:16:22.379 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:16:22.638 ==> default: Creating shared folders metadata...
00:16:22.638 ==> default: Starting domain.
00:16:24.544 ==> default: Waiting for domain to get an IP address...
00:16:42.630 ==> default: Waiting for SSH to become available...
00:16:42.630 ==> default: Configuring and enabling network interfaces...
00:16:47.899 default: SSH address: 192.168.121.153:22
00:16:47.899 default: SSH username: vagrant
00:16:47.899 default: SSH auth method: private key
00:16:49.816 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:16:59.790 ==> default: Mounting SSHFS shared folder...
00:17:01.168 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:17:01.168 ==> default: Checking Mount..
00:17:03.069 ==> default: Folder Successfully Mounted!
00:17:03.069 ==> default: Running provisioner: file...
00:17:04.006 default: ~/.gitconfig => .gitconfig
00:17:04.572
00:17:04.572 SUCCESS!
00:17:04.572
00:17:04.572 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:17:04.572 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:17:04.572 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:17:04.572
00:17:04.580 [Pipeline] }
00:17:04.596 [Pipeline] // stage
00:17:04.606 [Pipeline] dir
00:17:04.607 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt
00:17:04.608 [Pipeline] {
00:17:04.623 [Pipeline] catchError
00:17:04.625 [Pipeline] {
00:17:04.637 [Pipeline] sh
00:17:04.914 + vagrant ssh-config --host vagrant
00:17:04.914 + sed -ne /^Host/,$p
00:17:04.914 + tee ssh_conf
00:17:08.197 Host vagrant
00:17:08.197 HostName 192.168.121.153
00:17:08.197 User vagrant
00:17:08.197 Port 22
00:17:08.197 UserKnownHostsFile /dev/null
00:17:08.197 StrictHostKeyChecking no
00:17:08.197 PasswordAuthentication no
00:17:08.197 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:17:08.197 IdentitiesOnly yes
00:17:08.197 LogLevel FATAL
00:17:08.197 ForwardAgent yes
00:17:08.197 ForwardX11 yes
00:17:08.197
00:17:08.211 [Pipeline] withEnv
00:17:08.213 [Pipeline] {
00:17:08.232 [Pipeline] sh
00:17:08.517 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:17:08.517 source /etc/os-release
00:17:08.517 [[ -e /image.version ]] && img=$(< /image.version)
00:17:08.517 # Minimal, systemd-like check.
00:17:08.517 if [[ -e /.dockerenv ]]; then
00:17:08.517 # Clear garbage from the node's name:
00:17:08.517 # agt-er_autotest_547-896 -> autotest_547-896
00:17:08.517 # $HOSTNAME is the actual container id
00:17:08.517 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:17:08.517 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:17:08.517 # We can assume this is a mount from a host where container is running,
00:17:08.517 # so fetch its hostname to easily identify the target swarm worker.
00:17:08.517 container="$(< /etc/hostname) ($agent)"
00:17:08.517 else
00:17:08.517 # Fallback
00:17:08.517 container=$agent
00:17:08.517 fi
00:17:08.517 fi
00:17:08.517 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:17:08.517
00:17:08.790 [Pipeline] }
00:17:08.816 [Pipeline] // withEnv
00:17:08.831 [Pipeline] setCustomBuildProperty
00:17:08.853 [Pipeline] stage
00:17:08.857 [Pipeline] { (Tests)
00:17:08.881 [Pipeline] sh
00:17:09.166 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:17:09.439 [Pipeline] sh
00:17:09.720 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:17:09.994 [Pipeline] timeout
00:17:09.994 Timeout set to expire in 1 hr 30 min
00:17:09.996 [Pipeline] {
00:17:10.014 [Pipeline] sh
00:17:10.299 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:17:10.866 HEAD is now at ff173863b ut/bdev: Remove duplication with many stups among unit test files
00:17:10.878 [Pipeline] sh
00:17:11.160 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:17:11.432 [Pipeline] sh
00:17:11.713 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:17:11.988 [Pipeline] sh
00:17:12.267 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:17:12.525 ++ readlink -f spdk_repo
00:17:12.525 + DIR_ROOT=/home/vagrant/spdk_repo
00:17:12.525 + [[ -n /home/vagrant/spdk_repo ]]
00:17:12.525 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:17:12.525 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:17:12.525 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:17:12.525 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:17:12.525 + [[ -d /home/vagrant/spdk_repo/output ]]
00:17:12.525 + [[ raid-vg-autotest == pkgdep-* ]]
00:17:12.525 + cd /home/vagrant/spdk_repo
00:17:12.525 + source /etc/os-release
00:17:12.525 ++ NAME='Fedora Linux'
00:17:12.525 ++ VERSION='39 (Cloud Edition)'
00:17:12.525 ++ ID=fedora
00:17:12.525 ++ VERSION_ID=39
00:17:12.525 ++ VERSION_CODENAME=
00:17:12.525 ++ PLATFORM_ID=platform:f39
00:17:12.525 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:17:12.525 ++ ANSI_COLOR='0;38;2;60;110;180'
00:17:12.525 ++ LOGO=fedora-logo-icon
00:17:12.525 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:17:12.525 ++ HOME_URL=https://fedoraproject.org/
00:17:12.525 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:17:12.525 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:17:12.525 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:17:12.525 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:17:12.525 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:17:12.525 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:17:12.525 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:17:12.525 ++ SUPPORT_END=2024-11-12
00:17:12.525 ++ VARIANT='Cloud Edition'
00:17:12.525 ++ VARIANT_ID=cloud
00:17:12.525 + uname -a
00:17:12.525 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:17:12.525 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:17:13.091 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:17:13.091 Hugepages
00:17:13.091 node hugesize free / total
00:17:13.091 node0 1048576kB 0 / 0
00:17:13.091 node0 2048kB 0 / 0
00:17:13.091
00:17:13.091 Type BDF Vendor Device NUMA Driver Device Block devices
00:17:13.091 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:17:13.091 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:17:13.091 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:17:13.349 + rm -f /tmp/spdk-ld-path
00:17:13.349 + source autorun-spdk.conf
00:17:13.349 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:17:13.349 ++ SPDK_RUN_ASAN=1
00:17:13.349 ++ SPDK_RUN_UBSAN=1
00:17:13.349 ++ SPDK_TEST_RAID=1
00:17:13.349 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:17:13.349 ++ RUN_NIGHTLY=0
00:17:13.349 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:17:13.349 + [[ -n '' ]]
00:17:13.349 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:17:13.349 + for M in /var/spdk/build-*-manifest.txt
00:17:13.349 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:17:13.349 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:17:13.349 + for M in /var/spdk/build-*-manifest.txt
00:17:13.349 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:17:13.349 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:17:13.349 + for M in /var/spdk/build-*-manifest.txt
00:17:13.349 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:17:13.349 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:17:13.349 ++ uname
00:17:13.349 + [[ Linux == \L\i\n\u\x ]]
00:17:13.349 + sudo dmesg -T
00:17:13.349 + sudo dmesg --clear
00:17:13.349 + dmesg_pid=5214
00:17:13.349 + [[ Fedora Linux == FreeBSD ]]
00:17:13.349 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:17:13.349 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:17:13.349 + sudo dmesg -Tw
00:17:13.349 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:17:13.349 + [[ -x /usr/src/fio-static/fio ]]
00:17:13.670 + export FIO_BIN=/usr/src/fio-static/fio
00:17:13.670 + FIO_BIN=/usr/src/fio-static/fio
00:17:13.670 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:17:13.670 + [[ ! -v VFIO_QEMU_BIN ]]
00:17:13.670 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:17:13.670 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:17:13.670 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:17:13.670 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:17:13.670 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:17:13.670 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:17:13.670 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:17:13.670 17:13:43 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:17:13.670 17:13:43 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:17:13.670 17:13:43 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:17:13.670 17:13:43 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:17:13.670 17:13:43 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:17:13.670 17:13:43 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:17:13.670 17:13:43 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:17:13.670 17:13:43 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:17:13.670 17:13:43 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:17:13.670 17:13:43 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:17:13.670 17:13:43 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:17:13.670 17:13:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:17:13.670 17:13:43 -- scripts/common.sh@15 -- $ shopt -s extglob
00:17:13.670 17:13:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:17:13.670 17:13:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:13.670 17:13:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:13.670 17:13:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:13.670 17:13:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:13.670 17:13:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:13.670 17:13:43 -- paths/export.sh@5 -- $ export PATH
00:17:13.670 17:13:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:13.670 17:13:43 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:17:13.670 17:13:43 -- common/autobuild_common.sh@493 -- $ date +%s
00:17:13.670 17:13:43 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732641223.XXXXXX
00:17:13.671 17:13:43 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732641223.AhME2o
00:17:13.671 17:13:43 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:17:13.671 17:13:43 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:17:13.671 17:13:43 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:17:13.671 17:13:43 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:17:13.671 17:13:43 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:17:13.671 17:13:43 -- common/autobuild_common.sh@509 -- $ get_config_params
00:17:13.671 17:13:43 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:17:13.671 17:13:43 -- common/autotest_common.sh@10 -- $ set +x
00:17:13.671 17:13:43 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:17:13.671 17:13:43 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:17:13.671 17:13:43 -- pm/common@17 -- $ local monitor
00:17:13.671 17:13:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:17:13.671 17:13:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:17:13.671 17:13:43 -- pm/common@25 -- $ sleep 1
00:17:13.671 17:13:43 -- pm/common@21 -- $ date +%s
00:17:13.671 17:13:43 -- pm/common@21 -- $ date +%s
00:17:13.671 17:13:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732641223
00:17:13.671 17:13:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732641223
00:17:13.671 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732641223_collect-cpu-load.pm.log
00:17:13.930 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732641223_collect-vmstat.pm.log
00:17:14.870 17:13:44 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:17:14.871 17:13:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:17:14.871 17:13:44 -- spdk/autobuild.sh@12 -- $ umask 022
00:17:14.871 17:13:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:17:14.871 17:13:44 -- spdk/autobuild.sh@16 -- $ date -u
00:17:14.871 Tue Nov 26 05:13:44 PM UTC 2024
00:17:14.871 17:13:44 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:17:14.871 v25.01-pre-252-gff173863b
00:17:14.871 17:13:44 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:17:14.871 17:13:44 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:17:14.871 17:13:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:17:14.871 17:13:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:17:14.871 17:13:44 -- common/autotest_common.sh@10 -- $ set +x
00:17:14.871 ************************************
00:17:14.871 START TEST asan
00:17:14.871 ************************************
00:17:14.871 using asan
00:17:14.871 17:13:44 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:17:14.871
00:17:14.871 real 0m0.001s
00:17:14.871 user 0m0.000s
00:17:14.871 sys 0m0.000s
00:17:14.871 17:13:44 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:17:14.871 17:13:44 asan -- common/autotest_common.sh@10 -- $ set +x
00:17:14.871 ************************************
00:17:14.871 END TEST asan
00:17:14.871 ************************************
00:17:14.871 17:13:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:17:14.871 17:13:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:17:14.871 17:13:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:17:14.871 17:13:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:17:14.871 17:13:44 -- common/autotest_common.sh@10 -- $ set +x
00:17:14.871 ************************************
00:17:14.871 START TEST ubsan
00:17:14.871 ************************************
00:17:14.871 using ubsan
00:17:14.871 17:13:44 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:17:14.871
00:17:14.871 real 0m0.000s
00:17:14.871 user 0m0.000s
00:17:14.871 sys 0m0.000s
00:17:14.871 17:13:44 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:17:14.871 17:13:44 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:17:14.871 ************************************
00:17:14.871 END TEST ubsan
00:17:14.871 ************************************
00:17:14.871 17:13:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:17:14.871 17:13:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:17:14.871 17:13:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:17:14.871 17:13:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:17:14.871 17:13:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:17:14.871 17:13:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:17:14.871 17:13:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:17:14.871 17:13:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:17:14.871 17:13:44 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:17:15.128 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:17:15.128 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:17:15.384 Using 'verbs' RDMA provider
00:17:34.839 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:17:49.815 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:17:49.815 Creating mk/config.mk...done.
00:17:49.815 Creating mk/cc.flags.mk...done.
00:17:49.815 Type 'make' to build.
00:17:49.815 17:14:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:17:49.815 17:14:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:17:49.815 17:14:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:17:49.815 17:14:18 -- common/autotest_common.sh@10 -- $ set +x
00:17:49.815 ************************************
00:17:49.815 START TEST make
00:17:49.815 ************************************
00:17:49.815 17:14:18 make -- common/autotest_common.sh@1129 -- $ make -j10
00:17:49.815 make[1]: Nothing to be done for 'all'.
00:18:02.027 The Meson build system 00:18:02.027 Version: 1.5.0 00:18:02.027 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:18:02.027 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:18:02.027 Build type: native build 00:18:02.027 Program cat found: YES (/usr/bin/cat) 00:18:02.027 Project name: DPDK 00:18:02.027 Project version: 24.03.0 00:18:02.027 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:18:02.027 C linker for the host machine: cc ld.bfd 2.40-14 00:18:02.027 Host machine cpu family: x86_64 00:18:02.027 Host machine cpu: x86_64 00:18:02.027 Message: ## Building in Developer Mode ## 00:18:02.027 Program pkg-config found: YES (/usr/bin/pkg-config) 00:18:02.027 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:18:02.027 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:18:02.027 Program python3 found: YES (/usr/bin/python3) 00:18:02.027 Program cat found: YES (/usr/bin/cat) 00:18:02.027 Compiler for C supports arguments -march=native: YES 00:18:02.027 Checking for size of "void *" : 8 00:18:02.027 Checking for size of "void *" : 8 (cached) 00:18:02.027 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:18:02.027 Library m found: YES 00:18:02.027 Library numa found: YES 00:18:02.027 Has header "numaif.h" : YES 00:18:02.027 Library fdt found: NO 00:18:02.027 Library execinfo found: NO 00:18:02.027 Has header "execinfo.h" : YES 00:18:02.027 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:18:02.027 Run-time dependency libarchive found: NO (tried pkgconfig) 00:18:02.028 Run-time dependency libbsd found: NO (tried pkgconfig) 00:18:02.028 Run-time dependency jansson found: NO (tried pkgconfig) 00:18:02.028 Run-time dependency openssl found: YES 3.1.1 00:18:02.028 Run-time dependency libpcap found: YES 1.10.4 00:18:02.028 Has header "pcap.h" with dependency 
libpcap: YES 00:18:02.028 Compiler for C supports arguments -Wcast-qual: YES 00:18:02.028 Compiler for C supports arguments -Wdeprecated: YES 00:18:02.028 Compiler for C supports arguments -Wformat: YES 00:18:02.028 Compiler for C supports arguments -Wformat-nonliteral: NO 00:18:02.028 Compiler for C supports arguments -Wformat-security: NO 00:18:02.028 Compiler for C supports arguments -Wmissing-declarations: YES 00:18:02.028 Compiler for C supports arguments -Wmissing-prototypes: YES 00:18:02.028 Compiler for C supports arguments -Wnested-externs: YES 00:18:02.028 Compiler for C supports arguments -Wold-style-definition: YES 00:18:02.028 Compiler for C supports arguments -Wpointer-arith: YES 00:18:02.028 Compiler for C supports arguments -Wsign-compare: YES 00:18:02.028 Compiler for C supports arguments -Wstrict-prototypes: YES 00:18:02.028 Compiler for C supports arguments -Wundef: YES 00:18:02.028 Compiler for C supports arguments -Wwrite-strings: YES 00:18:02.028 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:18:02.028 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:18:02.028 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:18:02.028 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:18:02.028 Program objdump found: YES (/usr/bin/objdump) 00:18:02.028 Compiler for C supports arguments -mavx512f: YES 00:18:02.028 Checking if "AVX512 checking" compiles: YES 00:18:02.028 Fetching value of define "__SSE4_2__" : 1 00:18:02.028 Fetching value of define "__AES__" : 1 00:18:02.028 Fetching value of define "__AVX__" : 1 00:18:02.028 Fetching value of define "__AVX2__" : 1 00:18:02.028 Fetching value of define "__AVX512BW__" : 1 00:18:02.028 Fetching value of define "__AVX512CD__" : 1 00:18:02.028 Fetching value of define "__AVX512DQ__" : 1 00:18:02.028 Fetching value of define "__AVX512F__" : 1 00:18:02.028 Fetching value of define "__AVX512VL__" : 1 00:18:02.028 Fetching value of define 
"__PCLMUL__" : 1 00:18:02.028 Fetching value of define "__RDRND__" : 1 00:18:02.028 Fetching value of define "__RDSEED__" : 1 00:18:02.028 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:18:02.028 Fetching value of define "__znver1__" : (undefined) 00:18:02.028 Fetching value of define "__znver2__" : (undefined) 00:18:02.028 Fetching value of define "__znver3__" : (undefined) 00:18:02.028 Fetching value of define "__znver4__" : (undefined) 00:18:02.028 Library asan found: YES 00:18:02.028 Compiler for C supports arguments -Wno-format-truncation: YES 00:18:02.028 Message: lib/log: Defining dependency "log" 00:18:02.028 Message: lib/kvargs: Defining dependency "kvargs" 00:18:02.028 Message: lib/telemetry: Defining dependency "telemetry" 00:18:02.028 Library rt found: YES 00:18:02.028 Checking for function "getentropy" : NO 00:18:02.028 Message: lib/eal: Defining dependency "eal" 00:18:02.028 Message: lib/ring: Defining dependency "ring" 00:18:02.028 Message: lib/rcu: Defining dependency "rcu" 00:18:02.028 Message: lib/mempool: Defining dependency "mempool" 00:18:02.028 Message: lib/mbuf: Defining dependency "mbuf" 00:18:02.028 Fetching value of define "__PCLMUL__" : 1 (cached) 00:18:02.028 Fetching value of define "__AVX512F__" : 1 (cached) 00:18:02.028 Fetching value of define "__AVX512BW__" : 1 (cached) 00:18:02.028 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:18:02.028 Fetching value of define "__AVX512VL__" : 1 (cached) 00:18:02.028 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:18:02.028 Compiler for C supports arguments -mpclmul: YES 00:18:02.028 Compiler for C supports arguments -maes: YES 00:18:02.028 Compiler for C supports arguments -mavx512f: YES (cached) 00:18:02.028 Compiler for C supports arguments -mavx512bw: YES 00:18:02.028 Compiler for C supports arguments -mavx512dq: YES 00:18:02.028 Compiler for C supports arguments -mavx512vl: YES 00:18:02.028 Compiler for C supports arguments -mvpclmulqdq: YES 
00:18:02.028 Compiler for C supports arguments -mavx2: YES 00:18:02.028 Compiler for C supports arguments -mavx: YES 00:18:02.028 Message: lib/net: Defining dependency "net" 00:18:02.028 Message: lib/meter: Defining dependency "meter" 00:18:02.028 Message: lib/ethdev: Defining dependency "ethdev" 00:18:02.028 Message: lib/pci: Defining dependency "pci" 00:18:02.028 Message: lib/cmdline: Defining dependency "cmdline" 00:18:02.028 Message: lib/hash: Defining dependency "hash" 00:18:02.028 Message: lib/timer: Defining dependency "timer" 00:18:02.028 Message: lib/compressdev: Defining dependency "compressdev" 00:18:02.028 Message: lib/cryptodev: Defining dependency "cryptodev" 00:18:02.028 Message: lib/dmadev: Defining dependency "dmadev" 00:18:02.028 Compiler for C supports arguments -Wno-cast-qual: YES 00:18:02.028 Message: lib/power: Defining dependency "power" 00:18:02.028 Message: lib/reorder: Defining dependency "reorder" 00:18:02.028 Message: lib/security: Defining dependency "security" 00:18:02.028 Has header "linux/userfaultfd.h" : YES 00:18:02.028 Has header "linux/vduse.h" : YES 00:18:02.028 Message: lib/vhost: Defining dependency "vhost" 00:18:02.028 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:18:02.028 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:18:02.028 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:18:02.028 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:18:02.028 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:18:02.028 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:18:02.028 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:18:02.028 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:18:02.028 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:18:02.028 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:18:02.028 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:18:02.028 Configuring doxy-api-html.conf using configuration 00:18:02.028 Configuring doxy-api-man.conf using configuration 00:18:02.028 Program mandb found: YES (/usr/bin/mandb) 00:18:02.028 Program sphinx-build found: NO 00:18:02.028 Configuring rte_build_config.h using configuration 00:18:02.028 Message: 00:18:02.028 ================= 00:18:02.028 Applications Enabled 00:18:02.028 ================= 00:18:02.028 00:18:02.028 apps: 00:18:02.028 00:18:02.028 00:18:02.028 Message: 00:18:02.028 ================= 00:18:02.028 Libraries Enabled 00:18:02.028 ================= 00:18:02.028 00:18:02.028 libs: 00:18:02.028 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:18:02.028 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:18:02.028 cryptodev, dmadev, power, reorder, security, vhost, 00:18:02.028 00:18:02.028 Message: 00:18:02.028 =============== 00:18:02.028 Drivers Enabled 00:18:02.028 =============== 00:18:02.028 00:18:02.028 common: 00:18:02.028 00:18:02.028 bus: 00:18:02.028 pci, vdev, 00:18:02.028 mempool: 00:18:02.028 ring, 00:18:02.028 dma: 00:18:02.028 00:18:02.028 net: 00:18:02.028 00:18:02.028 crypto: 00:18:02.028 00:18:02.028 compress: 00:18:02.028 00:18:02.028 vdpa: 00:18:02.028 00:18:02.028 00:18:02.028 Message: 00:18:02.028 ================= 00:18:02.028 Content Skipped 00:18:02.028 ================= 00:18:02.028 00:18:02.028 apps: 00:18:02.028 dumpcap: explicitly disabled via build config 00:18:02.028 graph: explicitly disabled via build config 00:18:02.028 pdump: explicitly disabled via build config 00:18:02.028 proc-info: explicitly disabled via build config 00:18:02.028 test-acl: explicitly disabled via build config 00:18:02.028 test-bbdev: explicitly disabled via build config 00:18:02.028 test-cmdline: explicitly disabled via build config 00:18:02.028 test-compress-perf: explicitly disabled via build config 00:18:02.028 test-crypto-perf: explicitly disabled via build 
config 00:18:02.028 test-dma-perf: explicitly disabled via build config 00:18:02.028 test-eventdev: explicitly disabled via build config 00:18:02.028 test-fib: explicitly disabled via build config 00:18:02.028 test-flow-perf: explicitly disabled via build config 00:18:02.028 test-gpudev: explicitly disabled via build config 00:18:02.028 test-mldev: explicitly disabled via build config 00:18:02.028 test-pipeline: explicitly disabled via build config 00:18:02.028 test-pmd: explicitly disabled via build config 00:18:02.028 test-regex: explicitly disabled via build config 00:18:02.029 test-sad: explicitly disabled via build config 00:18:02.029 test-security-perf: explicitly disabled via build config 00:18:02.029 00:18:02.029 libs: 00:18:02.029 argparse: explicitly disabled via build config 00:18:02.029 metrics: explicitly disabled via build config 00:18:02.029 acl: explicitly disabled via build config 00:18:02.029 bbdev: explicitly disabled via build config 00:18:02.029 bitratestats: explicitly disabled via build config 00:18:02.029 bpf: explicitly disabled via build config 00:18:02.029 cfgfile: explicitly disabled via build config 00:18:02.029 distributor: explicitly disabled via build config 00:18:02.029 efd: explicitly disabled via build config 00:18:02.029 eventdev: explicitly disabled via build config 00:18:02.029 dispatcher: explicitly disabled via build config 00:18:02.029 gpudev: explicitly disabled via build config 00:18:02.029 gro: explicitly disabled via build config 00:18:02.029 gso: explicitly disabled via build config 00:18:02.029 ip_frag: explicitly disabled via build config 00:18:02.029 jobstats: explicitly disabled via build config 00:18:02.029 latencystats: explicitly disabled via build config 00:18:02.029 lpm: explicitly disabled via build config 00:18:02.029 member: explicitly disabled via build config 00:18:02.029 pcapng: explicitly disabled via build config 00:18:02.029 rawdev: explicitly disabled via build config 00:18:02.029 regexdev: explicitly 
disabled via build config 00:18:02.029 mldev: explicitly disabled via build config 00:18:02.029 rib: explicitly disabled via build config 00:18:02.029 sched: explicitly disabled via build config 00:18:02.029 stack: explicitly disabled via build config 00:18:02.029 ipsec: explicitly disabled via build config 00:18:02.029 pdcp: explicitly disabled via build config 00:18:02.029 fib: explicitly disabled via build config 00:18:02.029 port: explicitly disabled via build config 00:18:02.029 pdump: explicitly disabled via build config 00:18:02.029 table: explicitly disabled via build config 00:18:02.029 pipeline: explicitly disabled via build config 00:18:02.029 graph: explicitly disabled via build config 00:18:02.029 node: explicitly disabled via build config 00:18:02.029 00:18:02.029 drivers: 00:18:02.029 common/cpt: not in enabled drivers build config 00:18:02.029 common/dpaax: not in enabled drivers build config 00:18:02.029 common/iavf: not in enabled drivers build config 00:18:02.029 common/idpf: not in enabled drivers build config 00:18:02.029 common/ionic: not in enabled drivers build config 00:18:02.029 common/mvep: not in enabled drivers build config 00:18:02.029 common/octeontx: not in enabled drivers build config 00:18:02.029 bus/auxiliary: not in enabled drivers build config 00:18:02.029 bus/cdx: not in enabled drivers build config 00:18:02.029 bus/dpaa: not in enabled drivers build config 00:18:02.029 bus/fslmc: not in enabled drivers build config 00:18:02.029 bus/ifpga: not in enabled drivers build config 00:18:02.029 bus/platform: not in enabled drivers build config 00:18:02.029 bus/uacce: not in enabled drivers build config 00:18:02.029 bus/vmbus: not in enabled drivers build config 00:18:02.029 common/cnxk: not in enabled drivers build config 00:18:02.029 common/mlx5: not in enabled drivers build config 00:18:02.029 common/nfp: not in enabled drivers build config 00:18:02.029 common/nitrox: not in enabled drivers build config 00:18:02.029 common/qat: not 
in enabled drivers build config 00:18:02.029 common/sfc_efx: not in enabled drivers build config 00:18:02.029 mempool/bucket: not in enabled drivers build config 00:18:02.029 mempool/cnxk: not in enabled drivers build config 00:18:02.029 mempool/dpaa: not in enabled drivers build config 00:18:02.029 mempool/dpaa2: not in enabled drivers build config 00:18:02.029 mempool/octeontx: not in enabled drivers build config 00:18:02.029 mempool/stack: not in enabled drivers build config 00:18:02.029 dma/cnxk: not in enabled drivers build config 00:18:02.029 dma/dpaa: not in enabled drivers build config 00:18:02.029 dma/dpaa2: not in enabled drivers build config 00:18:02.029 dma/hisilicon: not in enabled drivers build config 00:18:02.029 dma/idxd: not in enabled drivers build config 00:18:02.029 dma/ioat: not in enabled drivers build config 00:18:02.029 dma/skeleton: not in enabled drivers build config 00:18:02.029 net/af_packet: not in enabled drivers build config 00:18:02.029 net/af_xdp: not in enabled drivers build config 00:18:02.029 net/ark: not in enabled drivers build config 00:18:02.029 net/atlantic: not in enabled drivers build config 00:18:02.029 net/avp: not in enabled drivers build config 00:18:02.029 net/axgbe: not in enabled drivers build config 00:18:02.029 net/bnx2x: not in enabled drivers build config 00:18:02.029 net/bnxt: not in enabled drivers build config 00:18:02.029 net/bonding: not in enabled drivers build config 00:18:02.029 net/cnxk: not in enabled drivers build config 00:18:02.029 net/cpfl: not in enabled drivers build config 00:18:02.029 net/cxgbe: not in enabled drivers build config 00:18:02.029 net/dpaa: not in enabled drivers build config 00:18:02.029 net/dpaa2: not in enabled drivers build config 00:18:02.029 net/e1000: not in enabled drivers build config 00:18:02.029 net/ena: not in enabled drivers build config 00:18:02.029 net/enetc: not in enabled drivers build config 00:18:02.029 net/enetfec: not in enabled drivers build config 
00:18:02.029 net/enic: not in enabled drivers build config 00:18:02.029 net/failsafe: not in enabled drivers build config 00:18:02.029 net/fm10k: not in enabled drivers build config 00:18:02.029 net/gve: not in enabled drivers build config 00:18:02.029 net/hinic: not in enabled drivers build config 00:18:02.029 net/hns3: not in enabled drivers build config 00:18:02.029 net/i40e: not in enabled drivers build config 00:18:02.029 net/iavf: not in enabled drivers build config 00:18:02.029 net/ice: not in enabled drivers build config 00:18:02.029 net/idpf: not in enabled drivers build config 00:18:02.029 net/igc: not in enabled drivers build config 00:18:02.029 net/ionic: not in enabled drivers build config 00:18:02.029 net/ipn3ke: not in enabled drivers build config 00:18:02.029 net/ixgbe: not in enabled drivers build config 00:18:02.029 net/mana: not in enabled drivers build config 00:18:02.029 net/memif: not in enabled drivers build config 00:18:02.029 net/mlx4: not in enabled drivers build config 00:18:02.029 net/mlx5: not in enabled drivers build config 00:18:02.029 net/mvneta: not in enabled drivers build config 00:18:02.029 net/mvpp2: not in enabled drivers build config 00:18:02.029 net/netvsc: not in enabled drivers build config 00:18:02.029 net/nfb: not in enabled drivers build config 00:18:02.029 net/nfp: not in enabled drivers build config 00:18:02.029 net/ngbe: not in enabled drivers build config 00:18:02.029 net/null: not in enabled drivers build config 00:18:02.029 net/octeontx: not in enabled drivers build config 00:18:02.029 net/octeon_ep: not in enabled drivers build config 00:18:02.029 net/pcap: not in enabled drivers build config 00:18:02.029 net/pfe: not in enabled drivers build config 00:18:02.029 net/qede: not in enabled drivers build config 00:18:02.029 net/ring: not in enabled drivers build config 00:18:02.029 net/sfc: not in enabled drivers build config 00:18:02.029 net/softnic: not in enabled drivers build config 00:18:02.029 net/tap: not in 
enabled drivers build config 00:18:02.029 net/thunderx: not in enabled drivers build config 00:18:02.029 net/txgbe: not in enabled drivers build config 00:18:02.029 net/vdev_netvsc: not in enabled drivers build config 00:18:02.029 net/vhost: not in enabled drivers build config 00:18:02.029 net/virtio: not in enabled drivers build config 00:18:02.029 net/vmxnet3: not in enabled drivers build config 00:18:02.029 raw/*: missing internal dependency, "rawdev" 00:18:02.029 crypto/armv8: not in enabled drivers build config 00:18:02.029 crypto/bcmfs: not in enabled drivers build config 00:18:02.029 crypto/caam_jr: not in enabled drivers build config 00:18:02.029 crypto/ccp: not in enabled drivers build config 00:18:02.029 crypto/cnxk: not in enabled drivers build config 00:18:02.029 crypto/dpaa_sec: not in enabled drivers build config 00:18:02.029 crypto/dpaa2_sec: not in enabled drivers build config 00:18:02.029 crypto/ipsec_mb: not in enabled drivers build config 00:18:02.029 crypto/mlx5: not in enabled drivers build config 00:18:02.029 crypto/mvsam: not in enabled drivers build config 00:18:02.029 crypto/nitrox: not in enabled drivers build config 00:18:02.029 crypto/null: not in enabled drivers build config 00:18:02.029 crypto/octeontx: not in enabled drivers build config 00:18:02.029 crypto/openssl: not in enabled drivers build config 00:18:02.029 crypto/scheduler: not in enabled drivers build config 00:18:02.029 crypto/uadk: not in enabled drivers build config 00:18:02.029 crypto/virtio: not in enabled drivers build config 00:18:02.029 compress/isal: not in enabled drivers build config 00:18:02.029 compress/mlx5: not in enabled drivers build config 00:18:02.029 compress/nitrox: not in enabled drivers build config 00:18:02.029 compress/octeontx: not in enabled drivers build config 00:18:02.029 compress/zlib: not in enabled drivers build config 00:18:02.029 regex/*: missing internal dependency, "regexdev" 00:18:02.029 ml/*: missing internal dependency, "mldev" 
00:18:02.029 vdpa/ifc: not in enabled drivers build config 00:18:02.029 vdpa/mlx5: not in enabled drivers build config 00:18:02.029 vdpa/nfp: not in enabled drivers build config 00:18:02.029 vdpa/sfc: not in enabled drivers build config 00:18:02.029 event/*: missing internal dependency, "eventdev" 00:18:02.029 baseband/*: missing internal dependency, "bbdev" 00:18:02.029 gpu/*: missing internal dependency, "gpudev" 00:18:02.029 00:18:02.029 00:18:02.029 Build targets in project: 85 00:18:02.029 00:18:02.029 DPDK 24.03.0 00:18:02.029 00:18:02.029 User defined options 00:18:02.029 buildtype : debug 00:18:02.029 default_library : shared 00:18:02.029 libdir : lib 00:18:02.029 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:18:02.030 b_sanitize : address 00:18:02.030 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:18:02.030 c_link_args : 00:18:02.030 cpu_instruction_set: native 00:18:02.030 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:18:02.030 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:18:02.030 enable_docs : false 00:18:02.030 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:18:02.030 enable_kmods : false 00:18:02.030 max_lcores : 128 00:18:02.030 tests : false 00:18:02.030 00:18:02.030 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:18:02.030 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:18:02.030 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:18:02.030 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:18:02.030 [3/268] Linking static target lib/librte_kvargs.a 00:18:02.030 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:18:02.030 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:18:02.030 [6/268] Linking static target lib/librte_log.a 00:18:02.030 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:18:02.030 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:18:02.030 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:18:02.030 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:18:02.030 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:18:02.030 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:18:02.030 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:18:02.030 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:18:02.030 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:18:02.030 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:18:02.030 [17/268] Linking static target lib/librte_telemetry.a 00:18:02.030 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:18:02.030 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:18:02.030 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:18:02.030 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:18:02.030 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:18:02.030 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 
00:18:02.288 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:18:02.288 [25/268] Linking target lib/librte_log.so.24.1 00:18:02.288 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:18:02.288 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:18:02.288 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:18:02.288 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:18:02.546 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:18:02.546 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:18:02.546 [32/268] Linking target lib/librte_kvargs.so.24.1 00:18:02.546 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:18:02.546 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:18:02.803 [35/268] Linking target lib/librte_telemetry.so.24.1 00:18:02.803 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:18:02.803 [37/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:18:02.803 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:18:03.062 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:18:03.062 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:18:03.062 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:18:03.062 [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:18:03.062 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:18:03.062 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:18:03.062 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:18:03.062 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:18:03.062 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:18:03.320 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:18:03.577 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:18:03.578 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:18:03.578 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:18:03.578 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:18:03.835 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:18:03.835 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:18:03.835 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:18:03.835 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:18:03.835 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:18:03.835 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:18:04.094 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:18:04.094 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:18:04.094 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:18:04.094 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:18:04.352 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:18:04.352 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:18:04.352 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:18:04.352 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:18:04.611 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
00:18:04.611 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:18:04.611 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:18:04.611 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:18:04.869 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:18:04.869 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:18:04.869 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:18:04.869 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:18:04.869 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:18:04.869 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:18:04.869 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:18:05.127 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:18:05.127 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:18:05.127 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:18:05.127 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:18:05.127 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:18:05.387 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:18:05.387 [84/268] Linking static target lib/librte_ring.a 00:18:05.387 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:18:05.387 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:18:05.387 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:18:05.645 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:18:05.645 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:18:05.645 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:18:05.903 [91/268] Linking static 
target lib/librte_eal.a 00:18:05.903 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:18:05.903 [93/268] Linking static target lib/librte_mempool.a 00:18:05.903 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:18:05.903 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:18:05.903 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:18:05.903 [97/268] Linking static target lib/librte_rcu.a 00:18:05.903 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:18:05.903 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:18:05.903 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:18:06.469 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:18:06.469 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:18:06.469 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:18:06.469 [104/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:18:06.469 [105/268] Linking static target lib/librte_meter.a 00:18:06.469 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:18:06.469 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:18:06.726 [108/268] Linking static target lib/librte_net.a 00:18:06.726 [109/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:18:06.726 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:18:06.726 [111/268] Linking static target lib/librte_mbuf.a 00:18:06.985 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:18:06.985 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:18:06.985 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:18:06.985 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:18:07.243 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:18:07.244 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:18:07.244 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:18:07.875 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:18:07.875 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:18:07.875 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:18:07.875 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:18:07.875 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:18:07.875 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:18:08.147 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:18:08.147 [126/268] Linking static target lib/librte_pci.a 00:18:08.406 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:18:08.406 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:18:08.406 [129/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:18:08.406 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:18:08.664 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:18:08.664 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:18:08.664 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:18:08.664 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:18:08.664 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:18:08.664 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:18:08.664 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:18:08.664 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:18:08.665 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:18:08.923 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:18:08.923 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:18:08.923 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:18:08.923 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:18:08.923 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:18:08.923 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:18:08.923 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:18:09.181 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:18:09.439 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:18:09.439 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:18:09.439 [150/268] Linking static target lib/librte_timer.a 00:18:09.439 [151/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:18:09.439 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:18:09.439 [153/268] Linking static target lib/librte_cmdline.a 00:18:09.439 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:18:09.697 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:18:09.956 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:18:09.956 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:18:09.956 [158/268] Linking static target lib/librte_compressdev.a 00:18:09.956 [159/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:18:09.956 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:18:10.214 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:18:10.214 [162/268] Linking static target lib/librte_ethdev.a 00:18:10.214 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:18:10.214 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:18:10.214 [165/268] Linking static target lib/librte_hash.a 00:18:10.472 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:18:10.472 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:18:10.472 [168/268] Linking static target lib/librte_dmadev.a 00:18:10.472 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:18:10.730 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:18:10.730 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:18:10.730 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:18:10.989 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:18:10.989 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:11.248 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:18:11.248 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:18:11.507 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:18:11.507 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:18:11.507 [179/268] Linking static target lib/librte_cryptodev.a 00:18:11.507 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:11.507 [181/268] Compiling C object 
lib/librte_power.a.p/power_rte_power.c.o 00:18:11.507 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:18:11.507 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:18:11.765 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:18:12.024 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:18:12.024 [186/268] Linking static target lib/librte_power.a 00:18:12.024 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:18:12.024 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:18:12.024 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:18:12.281 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:18:12.281 [191/268] Linking static target lib/librte_reorder.a 00:18:12.281 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:18:12.281 [193/268] Linking static target lib/librte_security.a 00:18:12.847 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:18:12.847 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:18:13.104 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:18:13.363 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:18:13.363 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:18:13.363 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:18:13.363 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:18:13.621 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:18:13.879 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:18:13.879 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 
00:18:13.879 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:18:13.879 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:18:14.164 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:14.164 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:18:14.164 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:18:14.164 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:18:14.423 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:18:14.423 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:18:14.423 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:18:14.682 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:18:14.682 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:18:14.682 [215/268] Linking static target drivers/librte_bus_vdev.a 00:18:14.682 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:18:14.682 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:18:14.682 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:18:14.682 [219/268] Linking static target drivers/librte_bus_pci.a 00:18:14.682 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:18:14.682 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:18:14.941 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:18:14.941 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:18:14.941 [224/268] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:18:14.941 [225/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:14.941 [226/268] Linking static target drivers/librte_mempool_ring.a 00:18:15.199 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:18:16.578 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:18:18.479 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:18:18.737 [230/268] Linking target lib/librte_eal.so.24.1 00:18:18.993 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:18:18.993 [232/268] Linking target lib/librte_meter.so.24.1 00:18:18.993 [233/268] Linking target lib/librte_pci.so.24.1 00:18:18.993 [234/268] Linking target lib/librte_ring.so.24.1 00:18:18.993 [235/268] Linking target lib/librte_dmadev.so.24.1 00:18:18.993 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:18:18.993 [237/268] Linking target lib/librte_timer.so.24.1 00:18:18.993 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:18:19.257 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:18:19.257 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:18:19.257 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:18:19.257 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:18:19.257 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:18:19.257 [244/268] Linking target lib/librte_rcu.so.24.1 00:18:19.257 [245/268] Linking target lib/librte_mempool.so.24.1 00:18:19.257 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:18:19.257 [247/268] Generating symbol file 
lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:18:19.516 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:18:19.516 [249/268] Linking target lib/librte_mbuf.so.24.1 00:18:19.516 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:19.516 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:18:19.516 [252/268] Linking target lib/librte_net.so.24.1 00:18:19.516 [253/268] Linking target lib/librte_compressdev.so.24.1 00:18:19.516 [254/268] Linking target lib/librte_reorder.so.24.1 00:18:19.516 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:18:19.774 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:18:19.774 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:18:19.774 [258/268] Linking target lib/librte_cmdline.so.24.1 00:18:19.774 [259/268] Linking target lib/librte_hash.so.24.1 00:18:19.774 [260/268] Linking target lib/librte_security.so.24.1 00:18:19.774 [261/268] Linking target lib/librte_ethdev.so.24.1 00:18:20.033 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:18:20.033 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:18:20.293 [264/268] Linking target lib/librte_power.so.24.1 00:18:21.228 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:18:21.485 [266/268] Linking static target lib/librte_vhost.a 00:18:23.383 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:18:23.383 [268/268] Linking target lib/librte_vhost.so.24.1 00:18:23.383 INFO: autodetecting backend as ninja 00:18:23.383 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:18:49.938 CC lib/log/log_flags.o 00:18:49.938 CC lib/log/log.o 00:18:49.938 CC 
lib/log/log_deprecated.o 00:18:49.938 CC lib/ut/ut.o 00:18:49.938 CC lib/ut_mock/mock.o 00:18:49.938 LIB libspdk_log.a 00:18:49.938 LIB libspdk_ut.a 00:18:49.938 LIB libspdk_ut_mock.a 00:18:49.938 SO libspdk_ut.so.2.0 00:18:49.938 SO libspdk_ut_mock.so.6.0 00:18:49.938 SO libspdk_log.so.7.1 00:18:49.938 SYMLINK libspdk_ut_mock.so 00:18:49.938 SYMLINK libspdk_ut.so 00:18:49.938 SYMLINK libspdk_log.so 00:18:49.938 CC lib/ioat/ioat.o 00:18:49.938 CC lib/dma/dma.o 00:18:49.938 CXX lib/trace_parser/trace.o 00:18:49.938 CC lib/util/base64.o 00:18:49.938 CC lib/util/crc16.o 00:18:49.938 CC lib/util/bit_array.o 00:18:49.938 CC lib/util/cpuset.o 00:18:49.938 CC lib/util/crc32c.o 00:18:49.938 CC lib/util/crc32.o 00:18:49.938 CC lib/util/crc32_ieee.o 00:18:49.938 CC lib/vfio_user/host/vfio_user_pci.o 00:18:49.938 CC lib/util/crc64.o 00:18:49.938 CC lib/util/dif.o 00:18:49.938 LIB libspdk_dma.a 00:18:49.938 SO libspdk_dma.so.5.0 00:18:49.938 CC lib/vfio_user/host/vfio_user.o 00:18:49.938 CC lib/util/fd.o 00:18:49.938 CC lib/util/fd_group.o 00:18:49.938 CC lib/util/file.o 00:18:49.938 LIB libspdk_ioat.a 00:18:49.938 SYMLINK libspdk_dma.so 00:18:49.938 CC lib/util/hexlify.o 00:18:49.938 CC lib/util/iov.o 00:18:49.938 SO libspdk_ioat.so.7.0 00:18:49.938 SYMLINK libspdk_ioat.so 00:18:49.938 CC lib/util/math.o 00:18:49.938 CC lib/util/net.o 00:18:49.938 CC lib/util/pipe.o 00:18:49.938 CC lib/util/strerror_tls.o 00:18:49.938 LIB libspdk_vfio_user.a 00:18:49.938 CC lib/util/string.o 00:18:49.938 SO libspdk_vfio_user.so.5.0 00:18:49.938 CC lib/util/uuid.o 00:18:49.938 CC lib/util/xor.o 00:18:49.938 SYMLINK libspdk_vfio_user.so 00:18:49.938 CC lib/util/zipf.o 00:18:49.938 CC lib/util/md5.o 00:18:49.938 LIB libspdk_util.a 00:18:49.938 LIB libspdk_trace_parser.a 00:18:49.938 SO libspdk_util.so.10.1 00:18:49.938 SO libspdk_trace_parser.so.6.0 00:18:49.938 SYMLINK libspdk_util.so 00:18:49.938 SYMLINK libspdk_trace_parser.so 00:18:49.938 CC lib/conf/conf.o 00:18:49.938 CC 
lib/rdma_utils/rdma_utils.o 00:18:49.938 CC lib/json/json_parse.o 00:18:49.938 CC lib/json/json_util.o 00:18:49.938 CC lib/json/json_write.o 00:18:49.938 CC lib/vmd/vmd.o 00:18:49.938 CC lib/idxd/idxd.o 00:18:49.938 CC lib/idxd/idxd_user.o 00:18:49.938 CC lib/vmd/led.o 00:18:49.938 CC lib/env_dpdk/env.o 00:18:49.938 CC lib/idxd/idxd_kernel.o 00:18:49.938 LIB libspdk_conf.a 00:18:49.938 CC lib/env_dpdk/memory.o 00:18:49.938 CC lib/env_dpdk/pci.o 00:18:49.938 SO libspdk_conf.so.6.0 00:18:49.938 CC lib/env_dpdk/init.o 00:18:49.938 LIB libspdk_json.a 00:18:49.938 LIB libspdk_rdma_utils.a 00:18:49.938 SYMLINK libspdk_conf.so 00:18:49.938 CC lib/env_dpdk/threads.o 00:18:49.938 CC lib/env_dpdk/pci_ioat.o 00:18:49.938 SO libspdk_json.so.6.0 00:18:49.938 SO libspdk_rdma_utils.so.1.0 00:18:49.938 SYMLINK libspdk_rdma_utils.so 00:18:49.938 SYMLINK libspdk_json.so 00:18:49.938 CC lib/env_dpdk/pci_virtio.o 00:18:49.938 CC lib/env_dpdk/pci_vmd.o 00:18:49.938 CC lib/env_dpdk/pci_idxd.o 00:18:49.938 CC lib/env_dpdk/pci_event.o 00:18:49.938 CC lib/env_dpdk/sigbus_handler.o 00:18:49.938 CC lib/rdma_provider/common.o 00:18:49.938 CC lib/env_dpdk/pci_dpdk.o 00:18:49.938 LIB libspdk_idxd.a 00:18:49.938 CC lib/env_dpdk/pci_dpdk_2207.o 00:18:49.938 SO libspdk_idxd.so.12.1 00:18:49.938 CC lib/env_dpdk/pci_dpdk_2211.o 00:18:49.938 LIB libspdk_vmd.a 00:18:49.938 CC lib/rdma_provider/rdma_provider_verbs.o 00:18:49.938 SYMLINK libspdk_idxd.so 00:18:49.938 SO libspdk_vmd.so.6.0 00:18:49.938 SYMLINK libspdk_vmd.so 00:18:49.938 CC lib/jsonrpc/jsonrpc_client.o 00:18:49.938 CC lib/jsonrpc/jsonrpc_server.o 00:18:49.938 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:18:49.938 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:18:49.938 LIB libspdk_rdma_provider.a 00:18:49.938 SO libspdk_rdma_provider.so.7.0 00:18:49.938 SYMLINK libspdk_rdma_provider.so 00:18:49.938 LIB libspdk_jsonrpc.a 00:18:49.938 SO libspdk_jsonrpc.so.6.0 00:18:49.938 SYMLINK libspdk_jsonrpc.so 00:18:50.197 LIB libspdk_env_dpdk.a 00:18:50.197 CC 
lib/rpc/rpc.o 00:18:50.456 SO libspdk_env_dpdk.so.15.1 00:18:50.456 SYMLINK libspdk_env_dpdk.so 00:18:50.456 LIB libspdk_rpc.a 00:18:50.714 SO libspdk_rpc.so.6.0 00:18:50.714 SYMLINK libspdk_rpc.so 00:18:50.973 CC lib/trace/trace_flags.o 00:18:50.973 CC lib/trace/trace.o 00:18:50.973 CC lib/trace/trace_rpc.o 00:18:50.973 CC lib/keyring/keyring.o 00:18:50.973 CC lib/notify/notify.o 00:18:50.973 CC lib/notify/notify_rpc.o 00:18:50.973 CC lib/keyring/keyring_rpc.o 00:18:51.231 LIB libspdk_notify.a 00:18:51.231 SO libspdk_notify.so.6.0 00:18:51.231 LIB libspdk_trace.a 00:18:51.231 LIB libspdk_keyring.a 00:18:51.231 SYMLINK libspdk_notify.so 00:18:51.491 SO libspdk_trace.so.11.0 00:18:51.491 SO libspdk_keyring.so.2.0 00:18:51.491 SYMLINK libspdk_trace.so 00:18:51.491 SYMLINK libspdk_keyring.so 00:18:51.750 CC lib/sock/sock.o 00:18:51.750 CC lib/sock/sock_rpc.o 00:18:51.750 CC lib/thread/iobuf.o 00:18:51.750 CC lib/thread/thread.o 00:18:52.326 LIB libspdk_sock.a 00:18:52.326 SO libspdk_sock.so.10.0 00:18:52.326 SYMLINK libspdk_sock.so 00:18:52.989 CC lib/nvme/nvme_ctrlr.o 00:18:52.989 CC lib/nvme/nvme_ctrlr_cmd.o 00:18:52.989 CC lib/nvme/nvme_fabric.o 00:18:52.989 CC lib/nvme/nvme_ns.o 00:18:52.989 CC lib/nvme/nvme_ns_cmd.o 00:18:52.989 CC lib/nvme/nvme_pcie.o 00:18:52.989 CC lib/nvme/nvme_pcie_common.o 00:18:52.989 CC lib/nvme/nvme_qpair.o 00:18:52.989 CC lib/nvme/nvme.o 00:18:53.555 LIB libspdk_thread.a 00:18:53.555 CC lib/nvme/nvme_quirks.o 00:18:53.555 SO libspdk_thread.so.11.0 00:18:53.555 CC lib/nvme/nvme_transport.o 00:18:53.555 CC lib/nvme/nvme_discovery.o 00:18:53.555 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:18:53.555 SYMLINK libspdk_thread.so 00:18:53.555 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:18:53.814 CC lib/nvme/nvme_tcp.o 00:18:53.814 CC lib/nvme/nvme_opal.o 00:18:53.814 CC lib/nvme/nvme_io_msg.o 00:18:54.073 CC lib/nvme/nvme_poll_group.o 00:18:54.332 CC lib/nvme/nvme_zns.o 00:18:54.332 CC lib/nvme/nvme_stubs.o 00:18:54.332 CC lib/accel/accel.o 00:18:54.332 CC 
lib/blob/blobstore.o 00:18:54.591 CC lib/blob/request.o 00:18:54.592 CC lib/accel/accel_rpc.o 00:18:54.592 CC lib/accel/accel_sw.o 00:18:54.851 CC lib/nvme/nvme_auth.o 00:18:54.851 CC lib/blob/zeroes.o 00:18:54.851 CC lib/blob/blob_bs_dev.o 00:18:54.851 CC lib/nvme/nvme_cuse.o 00:18:55.111 CC lib/init/json_config.o 00:18:55.111 CC lib/init/subsystem.o 00:18:55.111 CC lib/virtio/virtio.o 00:18:55.111 CC lib/fsdev/fsdev.o 00:18:55.370 CC lib/nvme/nvme_rdma.o 00:18:55.370 CC lib/init/subsystem_rpc.o 00:18:55.370 CC lib/init/rpc.o 00:18:55.629 CC lib/fsdev/fsdev_io.o 00:18:55.629 CC lib/virtio/virtio_vhost_user.o 00:18:55.629 LIB libspdk_init.a 00:18:55.629 CC lib/fsdev/fsdev_rpc.o 00:18:55.629 SO libspdk_init.so.6.0 00:18:55.889 SYMLINK libspdk_init.so 00:18:55.889 CC lib/virtio/virtio_vfio_user.o 00:18:55.889 CC lib/virtio/virtio_pci.o 00:18:55.889 LIB libspdk_accel.a 00:18:55.889 SO libspdk_accel.so.16.0 00:18:55.889 LIB libspdk_fsdev.a 00:18:55.889 SYMLINK libspdk_accel.so 00:18:55.889 CC lib/event/app.o 00:18:55.889 CC lib/event/app_rpc.o 00:18:55.889 CC lib/event/reactor.o 00:18:55.889 CC lib/event/log_rpc.o 00:18:56.148 SO libspdk_fsdev.so.2.0 00:18:56.148 CC lib/event/scheduler_static.o 00:18:56.148 SYMLINK libspdk_fsdev.so 00:18:56.148 LIB libspdk_virtio.a 00:18:56.148 CC lib/bdev/bdev.o 00:18:56.148 CC lib/bdev/bdev_rpc.o 00:18:56.148 CC lib/bdev/bdev_zone.o 00:18:56.148 SO libspdk_virtio.so.7.0 00:18:56.407 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:18:56.407 CC lib/bdev/part.o 00:18:56.407 SYMLINK libspdk_virtio.so 00:18:56.407 CC lib/bdev/scsi_nvme.o 00:18:56.667 LIB libspdk_event.a 00:18:56.667 SO libspdk_event.so.14.0 00:18:56.925 SYMLINK libspdk_event.so 00:18:56.925 LIB libspdk_nvme.a 00:18:56.925 LIB libspdk_fuse_dispatcher.a 00:18:57.249 SO libspdk_fuse_dispatcher.so.1.0 00:18:57.249 SYMLINK libspdk_fuse_dispatcher.so 00:18:57.249 SO libspdk_nvme.so.15.0 00:18:57.508 SYMLINK libspdk_nvme.so 00:18:58.445 LIB libspdk_blob.a 00:18:58.445 SO 
libspdk_blob.so.12.0 00:18:58.703 SYMLINK libspdk_blob.so 00:18:58.963 CC lib/lvol/lvol.o 00:18:58.963 CC lib/blobfs/blobfs.o 00:18:58.963 CC lib/blobfs/tree.o 00:18:59.226 LIB libspdk_bdev.a 00:18:59.227 SO libspdk_bdev.so.17.0 00:18:59.490 SYMLINK libspdk_bdev.so 00:18:59.750 CC lib/ublk/ublk.o 00:18:59.750 CC lib/ublk/ublk_rpc.o 00:18:59.750 CC lib/ftl/ftl_core.o 00:18:59.750 CC lib/nbd/nbd.o 00:18:59.750 CC lib/nbd/nbd_rpc.o 00:18:59.750 CC lib/ftl/ftl_init.o 00:18:59.750 CC lib/nvmf/ctrlr.o 00:18:59.750 CC lib/scsi/dev.o 00:19:00.009 CC lib/scsi/lun.o 00:19:00.009 LIB libspdk_blobfs.a 00:19:00.009 CC lib/scsi/port.o 00:19:00.009 CC lib/ftl/ftl_layout.o 00:19:00.009 SO libspdk_blobfs.so.11.0 00:19:00.009 CC lib/ftl/ftl_debug.o 00:19:00.009 SYMLINK libspdk_blobfs.so 00:19:00.009 CC lib/ftl/ftl_io.o 00:19:00.009 LIB libspdk_lvol.a 00:19:00.009 CC lib/scsi/scsi.o 00:19:00.009 SO libspdk_lvol.so.11.0 00:19:00.268 LIB libspdk_nbd.a 00:19:00.268 SYMLINK libspdk_lvol.so 00:19:00.268 CC lib/ftl/ftl_sb.o 00:19:00.268 CC lib/nvmf/ctrlr_discovery.o 00:19:00.268 SO libspdk_nbd.so.7.0 00:19:00.268 CC lib/nvmf/ctrlr_bdev.o 00:19:00.268 CC lib/scsi/scsi_bdev.o 00:19:00.268 CC lib/nvmf/subsystem.o 00:19:00.268 SYMLINK libspdk_nbd.so 00:19:00.268 CC lib/nvmf/nvmf.o 00:19:00.268 CC lib/nvmf/nvmf_rpc.o 00:19:00.268 CC lib/nvmf/transport.o 00:19:00.528 CC lib/ftl/ftl_l2p.o 00:19:00.528 LIB libspdk_ublk.a 00:19:00.528 SO libspdk_ublk.so.3.0 00:19:00.528 SYMLINK libspdk_ublk.so 00:19:00.528 CC lib/ftl/ftl_l2p_flat.o 00:19:00.528 CC lib/nvmf/tcp.o 00:19:00.787 CC lib/scsi/scsi_pr.o 00:19:00.787 CC lib/ftl/ftl_nv_cache.o 00:19:00.787 CC lib/ftl/ftl_band.o 00:19:01.047 CC lib/ftl/ftl_band_ops.o 00:19:01.047 CC lib/scsi/scsi_rpc.o 00:19:01.047 CC lib/ftl/ftl_writer.o 00:19:01.306 CC lib/ftl/ftl_rq.o 00:19:01.306 CC lib/scsi/task.o 00:19:01.306 CC lib/nvmf/stubs.o 00:19:01.306 CC lib/nvmf/mdns_server.o 00:19:01.306 CC lib/ftl/ftl_reloc.o 00:19:01.566 CC lib/ftl/ftl_l2p_cache.o 
00:19:01.566 CC lib/nvmf/rdma.o 00:19:01.566 LIB libspdk_scsi.a 00:19:01.566 SO libspdk_scsi.so.9.0 00:19:01.825 CC lib/nvmf/auth.o 00:19:01.825 CC lib/ftl/ftl_p2l.o 00:19:01.825 SYMLINK libspdk_scsi.so 00:19:01.825 CC lib/ftl/ftl_p2l_log.o 00:19:01.825 CC lib/ftl/mngt/ftl_mngt.o 00:19:01.825 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:19:02.084 CC lib/vhost/vhost.o 00:19:02.084 CC lib/iscsi/conn.o 00:19:02.084 CC lib/iscsi/init_grp.o 00:19:02.084 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:19:02.084 CC lib/iscsi/iscsi.o 00:19:02.084 CC lib/ftl/mngt/ftl_mngt_startup.o 00:19:02.345 CC lib/ftl/mngt/ftl_mngt_md.o 00:19:02.345 CC lib/ftl/mngt/ftl_mngt_misc.o 00:19:02.345 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:19:02.345 CC lib/vhost/vhost_rpc.o 00:19:02.604 CC lib/vhost/vhost_scsi.o 00:19:02.604 CC lib/vhost/vhost_blk.o 00:19:02.604 CC lib/vhost/rte_vhost_user.o 00:19:02.604 CC lib/iscsi/param.o 00:19:02.604 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:19:02.604 CC lib/iscsi/portal_grp.o 00:19:02.864 CC lib/ftl/mngt/ftl_mngt_band.o 00:19:02.864 CC lib/iscsi/tgt_node.o 00:19:03.123 CC lib/iscsi/iscsi_subsystem.o 00:19:03.123 CC lib/iscsi/iscsi_rpc.o 00:19:03.123 CC lib/iscsi/task.o 00:19:03.123 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:19:03.123 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:19:03.383 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:19:03.383 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:19:03.383 CC lib/ftl/utils/ftl_conf.o 00:19:03.383 CC lib/ftl/utils/ftl_md.o 00:19:03.383 CC lib/ftl/utils/ftl_mempool.o 00:19:03.383 CC lib/ftl/utils/ftl_bitmap.o 00:19:03.643 CC lib/ftl/utils/ftl_property.o 00:19:03.643 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:19:03.643 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:19:03.643 LIB libspdk_vhost.a 00:19:03.643 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:19:03.643 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:19:03.643 SO libspdk_vhost.so.8.0 00:19:03.901 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:19:03.901 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:19:03.901 LIB libspdk_iscsi.a 00:19:03.901 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:19:03.901 SYMLINK libspdk_vhost.so 00:19:03.901 CC lib/ftl/upgrade/ftl_sb_v3.o 00:19:03.901 CC lib/ftl/upgrade/ftl_sb_v5.o 00:19:03.901 CC lib/ftl/nvc/ftl_nvc_dev.o 00:19:03.901 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:19:03.901 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:19:03.901 SO libspdk_iscsi.so.8.0 00:19:03.901 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:19:04.159 CC lib/ftl/base/ftl_base_dev.o 00:19:04.159 CC lib/ftl/base/ftl_base_bdev.o 00:19:04.159 CC lib/ftl/ftl_trace.o 00:19:04.159 SYMLINK libspdk_iscsi.so 00:19:04.159 LIB libspdk_nvmf.a 00:19:04.419 LIB libspdk_ftl.a 00:19:04.419 SO libspdk_nvmf.so.20.0 00:19:04.677 SO libspdk_ftl.so.9.0 00:19:04.678 SYMLINK libspdk_nvmf.so 00:19:04.937 SYMLINK libspdk_ftl.so 00:19:05.506 CC module/env_dpdk/env_dpdk_rpc.o 00:19:05.506 CC module/accel/error/accel_error.o 00:19:05.506 CC module/sock/posix/posix.o 00:19:05.506 CC module/scheduler/dynamic/scheduler_dynamic.o 00:19:05.506 CC module/blob/bdev/blob_bdev.o 00:19:05.506 CC module/accel/iaa/accel_iaa.o 00:19:05.506 CC module/accel/dsa/accel_dsa.o 00:19:05.506 CC module/fsdev/aio/fsdev_aio.o 00:19:05.506 CC module/accel/ioat/accel_ioat.o 00:19:05.506 CC module/keyring/file/keyring.o 00:19:05.506 LIB libspdk_env_dpdk_rpc.a 00:19:05.506 SO libspdk_env_dpdk_rpc.so.6.0 00:19:05.766 SYMLINK libspdk_env_dpdk_rpc.so 00:19:05.766 CC module/keyring/file/keyring_rpc.o 00:19:05.766 CC module/fsdev/aio/fsdev_aio_rpc.o 00:19:05.766 CC module/accel/ioat/accel_ioat_rpc.o 00:19:05.766 CC module/accel/error/accel_error_rpc.o 00:19:05.766 LIB libspdk_scheduler_dynamic.a 00:19:05.766 CC module/accel/iaa/accel_iaa_rpc.o 00:19:05.766 SO libspdk_scheduler_dynamic.so.4.0 00:19:05.766 LIB libspdk_blob_bdev.a 00:19:05.766 LIB libspdk_keyring_file.a 00:19:05.766 SYMLINK libspdk_scheduler_dynamic.so 00:19:05.766 CC module/accel/dsa/accel_dsa_rpc.o 00:19:05.766 SO libspdk_blob_bdev.so.12.0 00:19:05.766 SO libspdk_keyring_file.so.2.0 00:19:05.766 LIB 
libspdk_accel_error.a 00:19:06.026 LIB libspdk_accel_iaa.a 00:19:06.026 LIB libspdk_accel_ioat.a 00:19:06.026 SO libspdk_accel_error.so.2.0 00:19:06.026 SYMLINK libspdk_keyring_file.so 00:19:06.026 SO libspdk_accel_ioat.so.6.0 00:19:06.026 SYMLINK libspdk_blob_bdev.so 00:19:06.026 SO libspdk_accel_iaa.so.3.0 00:19:06.026 CC module/fsdev/aio/linux_aio_mgr.o 00:19:06.026 LIB libspdk_accel_dsa.a 00:19:06.026 SYMLINK libspdk_accel_iaa.so 00:19:06.026 SYMLINK libspdk_accel_error.so 00:19:06.026 SYMLINK libspdk_accel_ioat.so 00:19:06.026 SO libspdk_accel_dsa.so.5.0 00:19:06.026 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:19:06.026 SYMLINK libspdk_accel_dsa.so 00:19:06.026 CC module/scheduler/gscheduler/gscheduler.o 00:19:06.026 CC module/keyring/linux/keyring.o 00:19:06.026 CC module/keyring/linux/keyring_rpc.o 00:19:06.285 LIB libspdk_scheduler_dpdk_governor.a 00:19:06.285 LIB libspdk_scheduler_gscheduler.a 00:19:06.285 SO libspdk_scheduler_dpdk_governor.so.4.0 00:19:06.285 LIB libspdk_keyring_linux.a 00:19:06.285 LIB libspdk_fsdev_aio.a 00:19:06.285 SO libspdk_scheduler_gscheduler.so.4.0 00:19:06.285 CC module/bdev/delay/vbdev_delay.o 00:19:06.285 CC module/blobfs/bdev/blobfs_bdev.o 00:19:06.285 SO libspdk_keyring_linux.so.1.0 00:19:06.285 CC module/bdev/error/vbdev_error.o 00:19:06.285 SO libspdk_fsdev_aio.so.1.0 00:19:06.285 SYMLINK libspdk_scheduler_dpdk_governor.so 00:19:06.285 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:19:06.285 SYMLINK libspdk_scheduler_gscheduler.so 00:19:06.285 CC module/bdev/gpt/gpt.o 00:19:06.285 CC module/bdev/gpt/vbdev_gpt.o 00:19:06.545 LIB libspdk_sock_posix.a 00:19:06.545 SYMLINK libspdk_keyring_linux.so 00:19:06.545 CC module/bdev/delay/vbdev_delay_rpc.o 00:19:06.545 SYMLINK libspdk_fsdev_aio.so 00:19:06.545 CC module/bdev/error/vbdev_error_rpc.o 00:19:06.545 SO libspdk_sock_posix.so.6.0 00:19:06.545 CC module/bdev/lvol/vbdev_lvol.o 00:19:06.545 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:19:06.545 LIB libspdk_blobfs_bdev.a 
00:19:06.545 SYMLINK libspdk_sock_posix.so 00:19:06.545 SO libspdk_blobfs_bdev.so.6.0 00:19:06.545 LIB libspdk_bdev_error.a 00:19:06.804 SYMLINK libspdk_blobfs_bdev.so 00:19:06.804 SO libspdk_bdev_error.so.6.0 00:19:06.804 LIB libspdk_bdev_gpt.a 00:19:06.804 SO libspdk_bdev_gpt.so.6.0 00:19:06.804 SYMLINK libspdk_bdev_error.so 00:19:06.804 CC module/bdev/null/bdev_null.o 00:19:06.804 CC module/bdev/null/bdev_null_rpc.o 00:19:06.804 LIB libspdk_bdev_delay.a 00:19:06.804 CC module/bdev/malloc/bdev_malloc.o 00:19:06.804 CC module/bdev/nvme/bdev_nvme.o 00:19:06.804 SO libspdk_bdev_delay.so.6.0 00:19:06.804 SYMLINK libspdk_bdev_gpt.so 00:19:06.804 CC module/bdev/passthru/vbdev_passthru.o 00:19:06.804 CC module/bdev/raid/bdev_raid.o 00:19:06.804 SYMLINK libspdk_bdev_delay.so 00:19:07.062 CC module/bdev/malloc/bdev_malloc_rpc.o 00:19:07.062 CC module/bdev/raid/bdev_raid_rpc.o 00:19:07.062 CC module/bdev/split/vbdev_split.o 00:19:07.062 LIB libspdk_bdev_lvol.a 00:19:07.062 CC module/bdev/zone_block/vbdev_zone_block.o 00:19:07.062 LIB libspdk_bdev_null.a 00:19:07.062 SO libspdk_bdev_lvol.so.6.0 00:19:07.062 SO libspdk_bdev_null.so.6.0 00:19:07.322 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:19:07.322 SYMLINK libspdk_bdev_lvol.so 00:19:07.322 CC module/bdev/raid/bdev_raid_sb.o 00:19:07.322 SYMLINK libspdk_bdev_null.so 00:19:07.322 CC module/bdev/nvme/bdev_nvme_rpc.o 00:19:07.322 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:19:07.322 LIB libspdk_bdev_malloc.a 00:19:07.322 SO libspdk_bdev_malloc.so.6.0 00:19:07.322 CC module/bdev/split/vbdev_split_rpc.o 00:19:07.322 CC module/bdev/aio/bdev_aio.o 00:19:07.322 SYMLINK libspdk_bdev_malloc.so 00:19:07.322 CC module/bdev/nvme/nvme_rpc.o 00:19:07.322 LIB libspdk_bdev_passthru.a 00:19:07.322 SO libspdk_bdev_passthru.so.6.0 00:19:07.322 CC module/bdev/raid/raid0.o 00:19:07.580 LIB libspdk_bdev_zone_block.a 00:19:07.580 LIB libspdk_bdev_split.a 00:19:07.580 SYMLINK libspdk_bdev_passthru.so 00:19:07.580 CC 
module/bdev/raid/raid1.o 00:19:07.580 CC module/bdev/raid/concat.o 00:19:07.580 SO libspdk_bdev_zone_block.so.6.0 00:19:07.580 SO libspdk_bdev_split.so.6.0 00:19:07.580 SYMLINK libspdk_bdev_zone_block.so 00:19:07.580 SYMLINK libspdk_bdev_split.so 00:19:07.580 CC module/bdev/raid/raid5f.o 00:19:07.580 CC module/bdev/nvme/bdev_mdns_client.o 00:19:07.838 CC module/bdev/aio/bdev_aio_rpc.o 00:19:07.838 CC module/bdev/nvme/vbdev_opal.o 00:19:07.838 CC module/bdev/nvme/vbdev_opal_rpc.o 00:19:07.838 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:19:07.838 LIB libspdk_bdev_aio.a 00:19:07.838 CC module/bdev/ftl/bdev_ftl.o 00:19:08.098 SO libspdk_bdev_aio.so.6.0 00:19:08.098 CC module/bdev/iscsi/bdev_iscsi.o 00:19:08.098 CC module/bdev/ftl/bdev_ftl_rpc.o 00:19:08.098 SYMLINK libspdk_bdev_aio.so 00:19:08.098 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:19:08.098 LIB libspdk_bdev_raid.a 00:19:08.098 CC module/bdev/virtio/bdev_virtio_scsi.o 00:19:08.098 CC module/bdev/virtio/bdev_virtio_rpc.o 00:19:08.098 CC module/bdev/virtio/bdev_virtio_blk.o 00:19:08.356 LIB libspdk_bdev_ftl.a 00:19:08.356 SO libspdk_bdev_raid.so.6.0 00:19:08.356 SO libspdk_bdev_ftl.so.6.0 00:19:08.356 SYMLINK libspdk_bdev_raid.so 00:19:08.356 SYMLINK libspdk_bdev_ftl.so 00:19:08.356 LIB libspdk_bdev_iscsi.a 00:19:08.614 SO libspdk_bdev_iscsi.so.6.0 00:19:08.614 SYMLINK libspdk_bdev_iscsi.so 00:19:08.870 LIB libspdk_bdev_virtio.a 00:19:08.870 SO libspdk_bdev_virtio.so.6.0 00:19:09.128 SYMLINK libspdk_bdev_virtio.so 00:19:10.068 LIB libspdk_bdev_nvme.a 00:19:10.068 SO libspdk_bdev_nvme.so.7.1 00:19:10.068 SYMLINK libspdk_bdev_nvme.so 00:19:10.637 CC module/event/subsystems/sock/sock.o 00:19:10.637 CC module/event/subsystems/keyring/keyring.o 00:19:10.637 CC module/event/subsystems/scheduler/scheduler.o 00:19:10.895 CC module/event/subsystems/vmd/vmd.o 00:19:10.895 CC module/event/subsystems/vmd/vmd_rpc.o 00:19:10.895 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:19:10.895 CC 
module/event/subsystems/fsdev/fsdev.o 00:19:10.895 CC module/event/subsystems/iobuf/iobuf.o 00:19:10.895 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:19:10.895 LIB libspdk_event_vhost_blk.a 00:19:10.895 LIB libspdk_event_keyring.a 00:19:10.895 LIB libspdk_event_sock.a 00:19:10.895 LIB libspdk_event_scheduler.a 00:19:10.895 LIB libspdk_event_fsdev.a 00:19:10.895 LIB libspdk_event_vmd.a 00:19:10.895 SO libspdk_event_keyring.so.1.0 00:19:10.895 SO libspdk_event_scheduler.so.4.0 00:19:10.895 SO libspdk_event_fsdev.so.1.0 00:19:10.895 SO libspdk_event_vhost_blk.so.3.0 00:19:10.895 SO libspdk_event_sock.so.5.0 00:19:10.895 SO libspdk_event_vmd.so.6.0 00:19:10.895 LIB libspdk_event_iobuf.a 00:19:10.895 SYMLINK libspdk_event_keyring.so 00:19:10.895 SYMLINK libspdk_event_scheduler.so 00:19:10.895 SYMLINK libspdk_event_fsdev.so 00:19:10.895 SO libspdk_event_iobuf.so.3.0 00:19:10.895 SYMLINK libspdk_event_vhost_blk.so 00:19:10.895 SYMLINK libspdk_event_vmd.so 00:19:10.895 SYMLINK libspdk_event_sock.so 00:19:11.154 SYMLINK libspdk_event_iobuf.so 00:19:11.412 CC module/event/subsystems/accel/accel.o 00:19:11.670 LIB libspdk_event_accel.a 00:19:11.670 SO libspdk_event_accel.so.6.0 00:19:11.670 SYMLINK libspdk_event_accel.so 00:19:12.238 CC module/event/subsystems/bdev/bdev.o 00:19:12.238 LIB libspdk_event_bdev.a 00:19:12.498 SO libspdk_event_bdev.so.6.0 00:19:12.499 SYMLINK libspdk_event_bdev.so 00:19:12.758 CC module/event/subsystems/scsi/scsi.o 00:19:12.758 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:19:12.758 CC module/event/subsystems/nbd/nbd.o 00:19:12.758 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:19:12.758 CC module/event/subsystems/ublk/ublk.o 00:19:13.016 LIB libspdk_event_nbd.a 00:19:13.016 LIB libspdk_event_scsi.a 00:19:13.016 SO libspdk_event_nbd.so.6.0 00:19:13.016 LIB libspdk_event_ublk.a 00:19:13.016 SO libspdk_event_scsi.so.6.0 00:19:13.016 SO libspdk_event_ublk.so.3.0 00:19:13.016 SYMLINK libspdk_event_nbd.so 00:19:13.016 LIB libspdk_event_nvmf.a 
00:19:13.016 SYMLINK libspdk_event_scsi.so 00:19:13.016 SO libspdk_event_nvmf.so.6.0 00:19:13.016 SYMLINK libspdk_event_ublk.so 00:19:13.328 SYMLINK libspdk_event_nvmf.so 00:19:13.597 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:19:13.597 CC module/event/subsystems/iscsi/iscsi.o 00:19:13.597 LIB libspdk_event_vhost_scsi.a 00:19:13.597 SO libspdk_event_vhost_scsi.so.3.0 00:19:13.597 LIB libspdk_event_iscsi.a 00:19:13.855 SYMLINK libspdk_event_vhost_scsi.so 00:19:13.855 SO libspdk_event_iscsi.so.6.0 00:19:13.855 SYMLINK libspdk_event_iscsi.so 00:19:14.113 SO libspdk.so.6.0 00:19:14.113 SYMLINK libspdk.so 00:19:14.372 CC test/rpc_client/rpc_client_test.o 00:19:14.372 TEST_HEADER include/spdk/accel.h 00:19:14.372 TEST_HEADER include/spdk/accel_module.h 00:19:14.372 TEST_HEADER include/spdk/assert.h 00:19:14.372 TEST_HEADER include/spdk/barrier.h 00:19:14.372 TEST_HEADER include/spdk/base64.h 00:19:14.372 TEST_HEADER include/spdk/bdev.h 00:19:14.372 CXX app/trace/trace.o 00:19:14.372 TEST_HEADER include/spdk/bdev_module.h 00:19:14.372 TEST_HEADER include/spdk/bdev_zone.h 00:19:14.372 TEST_HEADER include/spdk/bit_array.h 00:19:14.372 TEST_HEADER include/spdk/bit_pool.h 00:19:14.372 CC examples/interrupt_tgt/interrupt_tgt.o 00:19:14.372 TEST_HEADER include/spdk/blob_bdev.h 00:19:14.372 TEST_HEADER include/spdk/blobfs_bdev.h 00:19:14.372 TEST_HEADER include/spdk/blobfs.h 00:19:14.372 TEST_HEADER include/spdk/blob.h 00:19:14.372 TEST_HEADER include/spdk/conf.h 00:19:14.372 TEST_HEADER include/spdk/config.h 00:19:14.372 TEST_HEADER include/spdk/cpuset.h 00:19:14.372 TEST_HEADER include/spdk/crc16.h 00:19:14.372 TEST_HEADER include/spdk/crc32.h 00:19:14.372 TEST_HEADER include/spdk/crc64.h 00:19:14.372 TEST_HEADER include/spdk/dif.h 00:19:14.372 TEST_HEADER include/spdk/dma.h 00:19:14.372 TEST_HEADER include/spdk/endian.h 00:19:14.372 TEST_HEADER include/spdk/env_dpdk.h 00:19:14.372 TEST_HEADER include/spdk/env.h 00:19:14.372 TEST_HEADER include/spdk/event.h 
00:19:14.372 TEST_HEADER include/spdk/fd_group.h 00:19:14.372 TEST_HEADER include/spdk/fd.h 00:19:14.372 TEST_HEADER include/spdk/file.h 00:19:14.372 TEST_HEADER include/spdk/fsdev.h 00:19:14.372 TEST_HEADER include/spdk/fsdev_module.h 00:19:14.372 TEST_HEADER include/spdk/ftl.h 00:19:14.372 TEST_HEADER include/spdk/fuse_dispatcher.h 00:19:14.372 TEST_HEADER include/spdk/gpt_spec.h 00:19:14.372 TEST_HEADER include/spdk/hexlify.h 00:19:14.372 TEST_HEADER include/spdk/histogram_data.h 00:19:14.372 TEST_HEADER include/spdk/idxd.h 00:19:14.372 TEST_HEADER include/spdk/idxd_spec.h 00:19:14.372 TEST_HEADER include/spdk/init.h 00:19:14.372 CC test/thread/poller_perf/poller_perf.o 00:19:14.372 TEST_HEADER include/spdk/ioat.h 00:19:14.372 TEST_HEADER include/spdk/ioat_spec.h 00:19:14.372 CC examples/util/zipf/zipf.o 00:19:14.372 TEST_HEADER include/spdk/iscsi_spec.h 00:19:14.372 TEST_HEADER include/spdk/json.h 00:19:14.372 TEST_HEADER include/spdk/jsonrpc.h 00:19:14.372 TEST_HEADER include/spdk/keyring.h 00:19:14.372 TEST_HEADER include/spdk/keyring_module.h 00:19:14.372 TEST_HEADER include/spdk/likely.h 00:19:14.372 CC examples/ioat/perf/perf.o 00:19:14.372 TEST_HEADER include/spdk/log.h 00:19:14.372 TEST_HEADER include/spdk/lvol.h 00:19:14.372 TEST_HEADER include/spdk/md5.h 00:19:14.372 TEST_HEADER include/spdk/memory.h 00:19:14.372 TEST_HEADER include/spdk/mmio.h 00:19:14.372 TEST_HEADER include/spdk/nbd.h 00:19:14.372 TEST_HEADER include/spdk/net.h 00:19:14.372 TEST_HEADER include/spdk/notify.h 00:19:14.372 TEST_HEADER include/spdk/nvme.h 00:19:14.372 TEST_HEADER include/spdk/nvme_intel.h 00:19:14.372 TEST_HEADER include/spdk/nvme_ocssd.h 00:19:14.372 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:19:14.372 TEST_HEADER include/spdk/nvme_spec.h 00:19:14.372 CC test/dma/test_dma/test_dma.o 00:19:14.372 CC test/app/bdev_svc/bdev_svc.o 00:19:14.372 TEST_HEADER include/spdk/nvme_zns.h 00:19:14.372 TEST_HEADER include/spdk/nvmf_cmd.h 00:19:14.372 TEST_HEADER 
include/spdk/nvmf_fc_spec.h 00:19:14.372 TEST_HEADER include/spdk/nvmf.h 00:19:14.372 TEST_HEADER include/spdk/nvmf_spec.h 00:19:14.372 TEST_HEADER include/spdk/nvmf_transport.h 00:19:14.372 TEST_HEADER include/spdk/opal.h 00:19:14.372 TEST_HEADER include/spdk/opal_spec.h 00:19:14.372 TEST_HEADER include/spdk/pci_ids.h 00:19:14.372 TEST_HEADER include/spdk/pipe.h 00:19:14.372 TEST_HEADER include/spdk/queue.h 00:19:14.372 TEST_HEADER include/spdk/reduce.h 00:19:14.631 TEST_HEADER include/spdk/rpc.h 00:19:14.631 TEST_HEADER include/spdk/scheduler.h 00:19:14.631 TEST_HEADER include/spdk/scsi.h 00:19:14.631 TEST_HEADER include/spdk/scsi_spec.h 00:19:14.631 TEST_HEADER include/spdk/sock.h 00:19:14.631 TEST_HEADER include/spdk/stdinc.h 00:19:14.631 CC test/env/mem_callbacks/mem_callbacks.o 00:19:14.631 TEST_HEADER include/spdk/string.h 00:19:14.631 TEST_HEADER include/spdk/thread.h 00:19:14.631 TEST_HEADER include/spdk/trace.h 00:19:14.631 TEST_HEADER include/spdk/trace_parser.h 00:19:14.631 TEST_HEADER include/spdk/tree.h 00:19:14.631 TEST_HEADER include/spdk/ublk.h 00:19:14.631 TEST_HEADER include/spdk/util.h 00:19:14.631 TEST_HEADER include/spdk/uuid.h 00:19:14.631 TEST_HEADER include/spdk/version.h 00:19:14.631 TEST_HEADER include/spdk/vfio_user_pci.h 00:19:14.631 LINK rpc_client_test 00:19:14.631 TEST_HEADER include/spdk/vfio_user_spec.h 00:19:14.631 TEST_HEADER include/spdk/vhost.h 00:19:14.631 TEST_HEADER include/spdk/vmd.h 00:19:14.631 TEST_HEADER include/spdk/xor.h 00:19:14.631 TEST_HEADER include/spdk/zipf.h 00:19:14.631 CXX test/cpp_headers/accel.o 00:19:14.631 LINK interrupt_tgt 00:19:14.631 LINK poller_perf 00:19:14.631 LINK zipf 00:19:14.631 LINK bdev_svc 00:19:14.631 LINK ioat_perf 00:19:14.631 CXX test/cpp_headers/accel_module.o 00:19:14.631 CXX test/cpp_headers/assert.o 00:19:14.890 CXX test/cpp_headers/barrier.o 00:19:14.890 CC examples/ioat/verify/verify.o 00:19:14.890 LINK spdk_trace 00:19:14.890 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:19:14.890 CXX 
test/cpp_headers/base64.o 00:19:14.890 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:19:14.890 CC app/trace_record/trace_record.o 00:19:14.890 CXX test/cpp_headers/bdev.o 00:19:14.890 LINK test_dma 00:19:14.890 LINK verify 00:19:15.149 CC app/nvmf_tgt/nvmf_main.o 00:19:15.149 LINK mem_callbacks 00:19:15.149 CC app/iscsi_tgt/iscsi_tgt.o 00:19:15.149 LINK nvmf_tgt 00:19:15.149 CXX test/cpp_headers/bdev_module.o 00:19:15.149 CC app/spdk_tgt/spdk_tgt.o 00:19:15.149 LINK spdk_trace_record 00:19:15.407 LINK iscsi_tgt 00:19:15.407 CC test/env/vtophys/vtophys.o 00:19:15.407 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:19:15.407 CC examples/thread/thread/thread_ex.o 00:19:15.407 CXX test/cpp_headers/bdev_zone.o 00:19:15.407 LINK nvme_fuzz 00:19:15.407 LINK spdk_tgt 00:19:15.407 LINK vtophys 00:19:15.407 CC test/env/memory/memory_ut.o 00:19:15.407 LINK env_dpdk_post_init 00:19:15.665 CC test/event/event_perf/event_perf.o 00:19:15.665 CC test/event/reactor/reactor.o 00:19:15.665 CXX test/cpp_headers/bit_array.o 00:19:15.665 LINK thread 00:19:15.665 CXX test/cpp_headers/bit_pool.o 00:19:15.665 CC test/event/reactor_perf/reactor_perf.o 00:19:15.665 LINK event_perf 00:19:15.665 LINK reactor 00:19:15.665 CC test/app/histogram_perf/histogram_perf.o 00:19:15.923 CC app/spdk_lspci/spdk_lspci.o 00:19:15.923 CXX test/cpp_headers/blob_bdev.o 00:19:15.923 CC test/app/jsoncat/jsoncat.o 00:19:15.923 LINK reactor_perf 00:19:15.923 LINK histogram_perf 00:19:15.923 LINK spdk_lspci 00:19:15.923 CC test/app/stub/stub.o 00:19:15.923 CC app/spdk_nvme_perf/perf.o 00:19:16.181 LINK jsoncat 00:19:16.181 CXX test/cpp_headers/blobfs_bdev.o 00:19:16.181 CC examples/sock/hello_world/hello_sock.o 00:19:16.181 CC test/event/app_repeat/app_repeat.o 00:19:16.181 LINK stub 00:19:16.181 CC test/event/scheduler/scheduler.o 00:19:16.181 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:19:16.181 CXX test/cpp_headers/blobfs.o 00:19:16.439 CC test/env/pci/pci_ut.o 00:19:16.439 LINK app_repeat 00:19:16.439 
CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:19:16.439 CXX test/cpp_headers/blob.o 00:19:16.439 LINK hello_sock 00:19:16.439 LINK scheduler 00:19:16.697 CC app/spdk_nvme_identify/identify.o 00:19:16.697 CXX test/cpp_headers/conf.o 00:19:16.697 CC app/spdk_nvme_discover/discovery_aer.o 00:19:16.697 CXX test/cpp_headers/config.o 00:19:16.697 LINK pci_ut 00:19:16.697 LINK memory_ut 00:19:16.697 CXX test/cpp_headers/cpuset.o 00:19:16.697 CC examples/vmd/lsvmd/lsvmd.o 00:19:16.956 LINK spdk_nvme_discover 00:19:16.956 LINK vhost_fuzz 00:19:16.956 CC examples/idxd/perf/perf.o 00:19:16.956 LINK lsvmd 00:19:16.956 CXX test/cpp_headers/crc16.o 00:19:16.956 LINK iscsi_fuzz 00:19:16.956 CXX test/cpp_headers/crc32.o 00:19:16.956 LINK spdk_nvme_perf 00:19:17.215 CC examples/vmd/led/led.o 00:19:17.215 CC app/spdk_top/spdk_top.o 00:19:17.215 CXX test/cpp_headers/crc64.o 00:19:17.215 CC test/nvme/aer/aer.o 00:19:17.215 LINK idxd_perf 00:19:17.472 CC test/nvme/reset/reset.o 00:19:17.473 LINK led 00:19:17.473 CC test/blobfs/mkfs/mkfs.o 00:19:17.473 CC test/nvme/sgl/sgl.o 00:19:17.473 CC test/accel/dif/dif.o 00:19:17.473 CXX test/cpp_headers/dif.o 00:19:17.473 CXX test/cpp_headers/dma.o 00:19:17.473 LINK mkfs 00:19:17.731 LINK aer 00:19:17.731 LINK reset 00:19:17.731 LINK sgl 00:19:17.731 CC examples/fsdev/hello_world/hello_fsdev.o 00:19:17.731 LINK spdk_nvme_identify 00:19:17.731 CXX test/cpp_headers/endian.o 00:19:17.731 CXX test/cpp_headers/env_dpdk.o 00:19:17.731 CC test/lvol/esnap/esnap.o 00:19:17.990 CC test/nvme/e2edp/nvme_dp.o 00:19:17.990 CC test/nvme/overhead/overhead.o 00:19:17.990 CC test/nvme/err_injection/err_injection.o 00:19:17.990 CXX test/cpp_headers/env.o 00:19:17.990 LINK hello_fsdev 00:19:17.990 CC test/nvme/startup/startup.o 00:19:17.990 CC app/vhost/vhost.o 00:19:18.289 CXX test/cpp_headers/event.o 00:19:18.289 LINK dif 00:19:18.289 LINK err_injection 00:19:18.289 LINK startup 00:19:18.289 LINK nvme_dp 00:19:18.289 LINK spdk_top 00:19:18.289 LINK overhead 
00:19:18.289 LINK vhost 00:19:18.289 CXX test/cpp_headers/fd_group.o 00:19:18.548 CC examples/accel/perf/accel_perf.o 00:19:18.548 CXX test/cpp_headers/fd.o 00:19:18.548 CXX test/cpp_headers/file.o 00:19:18.548 CXX test/cpp_headers/fsdev.o 00:19:18.548 CC test/nvme/reserve/reserve.o 00:19:18.548 CC examples/nvme/hello_world/hello_world.o 00:19:18.548 CC examples/blob/hello_world/hello_blob.o 00:19:18.548 CC examples/nvme/reconnect/reconnect.o 00:19:18.548 CXX test/cpp_headers/fsdev_module.o 00:19:18.806 CC app/spdk_dd/spdk_dd.o 00:19:18.806 CC examples/nvme/nvme_manage/nvme_manage.o 00:19:18.806 CC app/fio/nvme/fio_plugin.o 00:19:18.806 LINK reserve 00:19:18.806 CXX test/cpp_headers/ftl.o 00:19:18.806 LINK hello_world 00:19:18.806 LINK hello_blob 00:19:19.064 LINK reconnect 00:19:19.064 CXX test/cpp_headers/fuse_dispatcher.o 00:19:19.064 LINK accel_perf 00:19:19.064 LINK spdk_dd 00:19:19.064 CC test/nvme/simple_copy/simple_copy.o 00:19:19.064 CC test/nvme/connect_stress/connect_stress.o 00:19:19.322 CXX test/cpp_headers/gpt_spec.o 00:19:19.322 CC examples/blob/cli/blobcli.o 00:19:19.322 CC examples/nvme/arbitration/arbitration.o 00:19:19.322 LINK nvme_manage 00:19:19.322 LINK connect_stress 00:19:19.322 CC examples/nvme/hotplug/hotplug.o 00:19:19.322 CXX test/cpp_headers/hexlify.o 00:19:19.322 LINK simple_copy 00:19:19.581 LINK spdk_nvme 00:19:19.581 CXX test/cpp_headers/histogram_data.o 00:19:19.581 CC examples/bdev/hello_world/hello_bdev.o 00:19:19.581 LINK hotplug 00:19:19.581 CC examples/bdev/bdevperf/bdevperf.o 00:19:19.581 CC app/fio/bdev/fio_plugin.o 00:19:19.581 CC test/nvme/boot_partition/boot_partition.o 00:19:19.841 CC test/nvme/compliance/nvme_compliance.o 00:19:19.841 LINK arbitration 00:19:19.841 CXX test/cpp_headers/idxd.o 00:19:19.841 LINK blobcli 00:19:19.841 LINK hello_bdev 00:19:19.841 CXX test/cpp_headers/idxd_spec.o 00:19:19.841 LINK boot_partition 00:19:20.101 CC examples/nvme/cmb_copy/cmb_copy.o 00:19:20.101 CXX test/cpp_headers/init.o 
00:19:20.101 CC examples/nvme/abort/abort.o 00:19:20.101 CXX test/cpp_headers/ioat.o 00:19:20.101 CXX test/cpp_headers/ioat_spec.o 00:19:20.101 LINK nvme_compliance 00:19:20.101 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:19:20.101 LINK cmb_copy 00:19:20.360 LINK spdk_bdev 00:19:20.360 CXX test/cpp_headers/iscsi_spec.o 00:19:20.360 CC test/nvme/fused_ordering/fused_ordering.o 00:19:20.360 LINK pmr_persistence 00:19:20.360 CXX test/cpp_headers/json.o 00:19:20.360 CC test/bdev/bdevio/bdevio.o 00:19:20.360 CC test/nvme/doorbell_aers/doorbell_aers.o 00:19:20.360 CXX test/cpp_headers/jsonrpc.o 00:19:20.360 LINK abort 00:19:20.620 CC test/nvme/fdp/fdp.o 00:19:20.620 CXX test/cpp_headers/keyring.o 00:19:20.620 CXX test/cpp_headers/keyring_module.o 00:19:20.620 LINK fused_ordering 00:19:20.620 CXX test/cpp_headers/likely.o 00:19:20.620 LINK bdevperf 00:19:20.620 CXX test/cpp_headers/log.o 00:19:20.620 LINK doorbell_aers 00:19:20.879 CXX test/cpp_headers/lvol.o 00:19:20.879 CXX test/cpp_headers/md5.o 00:19:20.879 CXX test/cpp_headers/memory.o 00:19:20.879 CXX test/cpp_headers/mmio.o 00:19:20.879 CC test/nvme/cuse/cuse.o 00:19:20.879 CXX test/cpp_headers/nbd.o 00:19:20.879 LINK fdp 00:19:20.879 CXX test/cpp_headers/net.o 00:19:20.879 CXX test/cpp_headers/notify.o 00:19:20.879 LINK bdevio 00:19:20.879 CXX test/cpp_headers/nvme.o 00:19:21.137 CXX test/cpp_headers/nvme_intel.o 00:19:21.137 CXX test/cpp_headers/nvme_ocssd.o 00:19:21.137 CXX test/cpp_headers/nvme_ocssd_spec.o 00:19:21.137 CXX test/cpp_headers/nvme_spec.o 00:19:21.137 CC examples/nvmf/nvmf/nvmf.o 00:19:21.137 CXX test/cpp_headers/nvme_zns.o 00:19:21.137 CXX test/cpp_headers/nvmf_cmd.o 00:19:21.137 CXX test/cpp_headers/nvmf_fc_spec.o 00:19:21.137 CXX test/cpp_headers/nvmf.o 00:19:21.137 CXX test/cpp_headers/nvmf_spec.o 00:19:21.137 CXX test/cpp_headers/nvmf_transport.o 00:19:21.396 CXX test/cpp_headers/opal.o 00:19:21.396 CXX test/cpp_headers/opal_spec.o 00:19:21.396 CXX test/cpp_headers/pci_ids.o 
00:19:21.396 CXX test/cpp_headers/pipe.o 00:19:21.396 CXX test/cpp_headers/queue.o 00:19:21.396 CXX test/cpp_headers/reduce.o 00:19:21.396 CXX test/cpp_headers/rpc.o 00:19:21.396 LINK nvmf 00:19:21.396 CXX test/cpp_headers/scheduler.o 00:19:21.655 CXX test/cpp_headers/scsi.o 00:19:21.655 CXX test/cpp_headers/scsi_spec.o 00:19:21.655 CXX test/cpp_headers/sock.o 00:19:21.655 CXX test/cpp_headers/stdinc.o 00:19:21.655 CXX test/cpp_headers/string.o 00:19:21.655 CXX test/cpp_headers/thread.o 00:19:21.655 CXX test/cpp_headers/trace.o 00:19:21.655 CXX test/cpp_headers/trace_parser.o 00:19:21.655 CXX test/cpp_headers/tree.o 00:19:21.655 CXX test/cpp_headers/ublk.o 00:19:21.655 CXX test/cpp_headers/util.o 00:19:21.655 CXX test/cpp_headers/uuid.o 00:19:21.655 CXX test/cpp_headers/version.o 00:19:21.655 CXX test/cpp_headers/vfio_user_pci.o 00:19:21.914 CXX test/cpp_headers/vfio_user_spec.o 00:19:21.914 CXX test/cpp_headers/vhost.o 00:19:21.914 CXX test/cpp_headers/vmd.o 00:19:21.914 CXX test/cpp_headers/xor.o 00:19:21.914 CXX test/cpp_headers/zipf.o 00:19:22.480 LINK cuse 00:19:24.410 LINK esnap 00:19:24.976 00:19:24.976 real 1m36.964s 00:19:24.976 user 8m18.351s 00:19:24.976 sys 1m59.735s 00:19:24.976 17:15:55 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:19:24.976 17:15:55 make -- common/autotest_common.sh@10 -- $ set +x 00:19:24.976 ************************************ 00:19:24.976 END TEST make 00:19:24.976 ************************************ 00:19:24.976 17:15:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:19:24.976 17:15:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:19:24.976 17:15:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:19:24.976 17:15:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:24.976 17:15:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:19:24.976 17:15:55 -- pm/common@44 -- $ pid=5256 00:19:24.976 17:15:55 -- pm/common@50 -- $ 
kill -TERM 5256 00:19:24.976 17:15:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:24.976 17:15:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:19:25.235 17:15:55 -- pm/common@44 -- $ pid=5257 00:19:25.235 17:15:55 -- pm/common@50 -- $ kill -TERM 5257 00:19:25.235 17:15:55 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:19:25.235 17:15:55 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:19:25.235 17:15:55 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:25.235 17:15:55 -- common/autotest_common.sh@1693 -- # lcov --version 00:19:25.235 17:15:55 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:25.235 17:15:55 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:25.235 17:15:55 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:25.235 17:15:55 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:25.235 17:15:55 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:25.235 17:15:55 -- scripts/common.sh@336 -- # IFS=.-: 00:19:25.235 17:15:55 -- scripts/common.sh@336 -- # read -ra ver1 00:19:25.235 17:15:55 -- scripts/common.sh@337 -- # IFS=.-: 00:19:25.235 17:15:55 -- scripts/common.sh@337 -- # read -ra ver2 00:19:25.235 17:15:55 -- scripts/common.sh@338 -- # local 'op=<' 00:19:25.235 17:15:55 -- scripts/common.sh@340 -- # ver1_l=2 00:19:25.235 17:15:55 -- scripts/common.sh@341 -- # ver2_l=1 00:19:25.235 17:15:55 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:25.235 17:15:55 -- scripts/common.sh@344 -- # case "$op" in 00:19:25.235 17:15:55 -- scripts/common.sh@345 -- # : 1 00:19:25.235 17:15:55 -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:25.235 17:15:55 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:25.235 17:15:55 -- scripts/common.sh@365 -- # decimal 1 00:19:25.235 17:15:55 -- scripts/common.sh@353 -- # local d=1 00:19:25.235 17:15:55 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:25.235 17:15:55 -- scripts/common.sh@355 -- # echo 1 00:19:25.235 17:15:55 -- scripts/common.sh@365 -- # ver1[v]=1 00:19:25.235 17:15:55 -- scripts/common.sh@366 -- # decimal 2 00:19:25.235 17:15:55 -- scripts/common.sh@353 -- # local d=2 00:19:25.235 17:15:55 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.235 17:15:55 -- scripts/common.sh@355 -- # echo 2 00:19:25.235 17:15:55 -- scripts/common.sh@366 -- # ver2[v]=2 00:19:25.235 17:15:55 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:25.235 17:15:55 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:25.235 17:15:55 -- scripts/common.sh@368 -- # return 0 00:19:25.235 17:15:55 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.235 17:15:55 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:25.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.235 --rc genhtml_branch_coverage=1 00:19:25.235 --rc genhtml_function_coverage=1 00:19:25.235 --rc genhtml_legend=1 00:19:25.235 --rc geninfo_all_blocks=1 00:19:25.235 --rc geninfo_unexecuted_blocks=1 00:19:25.235 00:19:25.235 ' 00:19:25.235 17:15:55 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:25.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.235 --rc genhtml_branch_coverage=1 00:19:25.235 --rc genhtml_function_coverage=1 00:19:25.235 --rc genhtml_legend=1 00:19:25.235 --rc geninfo_all_blocks=1 00:19:25.235 --rc geninfo_unexecuted_blocks=1 00:19:25.235 00:19:25.235 ' 00:19:25.235 17:15:55 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:25.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.235 --rc genhtml_branch_coverage=1 00:19:25.235 --rc 
genhtml_function_coverage=1 00:19:25.235 --rc genhtml_legend=1 00:19:25.235 --rc geninfo_all_blocks=1 00:19:25.235 --rc geninfo_unexecuted_blocks=1 00:19:25.235 00:19:25.235 ' 00:19:25.235 17:15:55 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:25.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.235 --rc genhtml_branch_coverage=1 00:19:25.235 --rc genhtml_function_coverage=1 00:19:25.235 --rc genhtml_legend=1 00:19:25.235 --rc geninfo_all_blocks=1 00:19:25.235 --rc geninfo_unexecuted_blocks=1 00:19:25.235 00:19:25.235 ' 00:19:25.235 17:15:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:25.235 17:15:55 -- nvmf/common.sh@7 -- # uname -s 00:19:25.235 17:15:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.235 17:15:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.235 17:15:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.235 17:15:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.235 17:15:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.235 17:15:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.235 17:15:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.235 17:15:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.236 17:15:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.236 17:15:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.495 17:15:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:efb88bfc-94fa-46e9-a548-d81a914b4dd7 00:19:25.495 17:15:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=efb88bfc-94fa-46e9-a548-d81a914b4dd7 00:19:25.495 17:15:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.495 17:15:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.495 17:15:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:25.495 17:15:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:19:25.495 17:15:55 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:25.495 17:15:55 -- scripts/common.sh@15 -- # shopt -s extglob 00:19:25.495 17:15:55 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.495 17:15:55 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.495 17:15:55 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.495 17:15:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.495 17:15:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.495 17:15:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.495 17:15:55 -- paths/export.sh@5 -- # export PATH 00:19:25.495 17:15:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.495 17:15:55 -- nvmf/common.sh@51 -- # : 0 00:19:25.495 17:15:55 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:25.495 17:15:55 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:25.495 17:15:55 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:19:25.495 17:15:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.495 17:15:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.495 17:15:55 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:25.495 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:25.495 17:15:55 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:25.495 17:15:55 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:25.495 17:15:55 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:25.495 17:15:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:19:25.495 17:15:55 -- spdk/autotest.sh@32 -- # uname -s 00:19:25.495 17:15:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:19:25.495 17:15:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:19:25.495 17:15:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:19:25.495 17:15:55 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:19:25.495 17:15:55 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:19:25.495 17:15:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:19:25.495 17:15:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:19:25.495 17:15:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:19:25.495 17:15:55 -- spdk/autotest.sh@48 -- # udevadm_pid=54353 00:19:25.495 17:15:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:19:25.495 17:15:55 -- pm/common@17 -- # local monitor 00:19:25.495 17:15:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:19:25.495 17:15:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:19:25.495 17:15:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:19:25.495 17:15:55 -- pm/common@21 -- # date +%s 00:19:25.495 17:15:55 -- pm/common@21 -- # date +%s 00:19:25.495 17:15:55 -- 
pm/common@25 -- # sleep 1 00:19:25.495 17:15:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732641355 00:19:25.495 17:15:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732641355 00:19:25.495 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732641355_collect-vmstat.pm.log 00:19:25.495 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732641355_collect-cpu-load.pm.log 00:19:26.433 17:15:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:19:26.433 17:15:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:19:26.433 17:15:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.433 17:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:26.433 17:15:56 -- spdk/autotest.sh@59 -- # create_test_list 00:19:26.433 17:15:56 -- common/autotest_common.sh@752 -- # xtrace_disable 00:19:26.433 17:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:26.433 17:15:56 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:19:26.433 17:15:56 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:19:26.433 17:15:56 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:19:26.433 17:15:56 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:19:26.433 17:15:56 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:19:26.691 17:15:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:19:26.691 17:15:56 -- common/autotest_common.sh@1457 -- # uname 00:19:26.691 17:15:56 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:19:26.691 17:15:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:19:26.691 17:15:56 -- common/autotest_common.sh@1477 -- # 
uname 00:19:26.691 17:15:56 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:19:26.691 17:15:56 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:19:26.691 17:15:56 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:19:26.691 lcov: LCOV version 1.15 00:19:26.691 17:15:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:19:44.819 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:19:44.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:19:59.712 17:16:28 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:19:59.712 17:16:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.712 17:16:28 -- common/autotest_common.sh@10 -- # set +x 00:19:59.712 17:16:28 -- spdk/autotest.sh@78 -- # rm -f 00:19:59.712 17:16:28 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:59.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:59.712 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:19:59.712 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:19:59.712 17:16:29 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:19:59.713 17:16:29 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:19:59.713 17:16:29 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:19:59.713 17:16:29 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:19:59.713 17:16:29 
-- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:59.713 17:16:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:19:59.713 17:16:29 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:59.713 17:16:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:59.713 17:16:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:59.713 17:16:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:59.713 17:16:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:19:59.713 17:16:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:59.713 17:16:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:59.713 17:16:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:59.713 17:16:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:59.713 17:16:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:19:59.713 17:16:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:19:59.713 17:16:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:19:59.713 17:16:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:59.713 17:16:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:59.713 17:16:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:19:59.713 17:16:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:19:59.713 17:16:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:19:59.713 17:16:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:59.713 17:16:29 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:19:59.713 17:16:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:19:59.713 17:16:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:19:59.713 17:16:29 -- spdk/autotest.sh@100 -- # block_in_use 
/dev/nvme0n1 00:19:59.713 17:16:29 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:19:59.713 17:16:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:19:59.713 No valid GPT data, bailing 00:19:59.713 17:16:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:59.713 17:16:29 -- scripts/common.sh@394 -- # pt= 00:19:59.713 17:16:29 -- scripts/common.sh@395 -- # return 1 00:19:59.713 17:16:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:19:59.713 1+0 records in 00:19:59.713 1+0 records out 00:19:59.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436811 s, 240 MB/s 00:19:59.713 17:16:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:19:59.713 17:16:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:19:59.713 17:16:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:19:59.713 17:16:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:19:59.713 17:16:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:19:59.713 No valid GPT data, bailing 00:19:59.713 17:16:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:59.713 17:16:29 -- scripts/common.sh@394 -- # pt= 00:19:59.713 17:16:29 -- scripts/common.sh@395 -- # return 1 00:19:59.713 17:16:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:19:59.713 1+0 records in 00:19:59.713 1+0 records out 00:19:59.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00358448 s, 293 MB/s 00:19:59.713 17:16:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:19:59.713 17:16:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:19:59.713 17:16:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:19:59.713 17:16:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:19:59.713 17:16:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:19:59.713 
No valid GPT data, bailing 00:19:59.713 17:16:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:19:59.713 17:16:29 -- scripts/common.sh@394 -- # pt= 00:19:59.713 17:16:29 -- scripts/common.sh@395 -- # return 1 00:19:59.713 17:16:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:19:59.713 1+0 records in 00:19:59.713 1+0 records out 00:19:59.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0065623 s, 160 MB/s 00:19:59.713 17:16:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:19:59.713 17:16:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:19:59.713 17:16:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:19:59.713 17:16:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:19:59.713 17:16:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:19:59.713 No valid GPT data, bailing 00:19:59.713 17:16:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:19:59.713 17:16:29 -- scripts/common.sh@394 -- # pt= 00:19:59.713 17:16:29 -- scripts/common.sh@395 -- # return 1 00:19:59.713 17:16:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:19:59.713 1+0 records in 00:19:59.713 1+0 records out 00:19:59.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00601792 s, 174 MB/s 00:19:59.713 17:16:29 -- spdk/autotest.sh@105 -- # sync 00:19:59.713 17:16:29 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:19:59.713 17:16:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:19:59.713 17:16:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:20:03.006 17:16:32 -- spdk/autotest.sh@111 -- # uname -s 00:20:03.006 17:16:32 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:20:03.006 17:16:32 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:20:03.006 17:16:32 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:20:03.574 
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:03.574 Hugepages 00:20:03.574 node hugesize free / total 00:20:03.574 node0 1048576kB 0 / 0 00:20:03.574 node0 2048kB 0 / 0 00:20:03.574 00:20:03.574 Type BDF Vendor Device NUMA Driver Device Block devices 00:20:03.574 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:20:03.574 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:20:03.944 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:20:03.944 17:16:33 -- spdk/autotest.sh@117 -- # uname -s 00:20:03.944 17:16:33 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:20:03.944 17:16:33 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:20:03.944 17:16:33 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:04.513 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:04.771 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:04.771 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:04.771 17:16:34 -- common/autotest_common.sh@1517 -- # sleep 1 00:20:06.149 17:16:35 -- common/autotest_common.sh@1518 -- # bdfs=() 00:20:06.149 17:16:35 -- common/autotest_common.sh@1518 -- # local bdfs 00:20:06.149 17:16:35 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:20:06.149 17:16:35 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:20:06.150 17:16:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:06.150 17:16:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:20:06.150 17:16:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:06.150 17:16:35 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:06.150 17:16:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:06.150 17:16:35 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:20:06.150 17:16:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:06.150 17:16:35 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:06.408 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:06.408 Waiting for block devices as requested 00:20:06.666 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:06.666 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:06.666 17:16:36 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:20:06.666 17:16:36 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:20:06.666 17:16:36 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:20:06.666 17:16:36 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:20:06.666 17:16:36 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:20:06.924 17:16:36 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:20:06.924 17:16:36 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:20:06.924 17:16:36 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:20:06.924 17:16:36 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:20:06.924 17:16:36 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:20:06.924 17:16:36 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:20:06.924 17:16:36 -- common/autotest_common.sh@1531 -- # grep oacs 00:20:06.924 17:16:36 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:20:06.924 17:16:36 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:20:06.924 17:16:36 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:20:06.924 17:16:36 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:20:06.924 17:16:36 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:20:06.924 17:16:36 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:20:06.924 17:16:36 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:20:06.924 17:16:36 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:20:06.924 17:16:36 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:20:06.924 17:16:36 -- common/autotest_common.sh@1543 -- # continue 00:20:06.924 17:16:36 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:20:06.924 17:16:36 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:20:06.924 17:16:36 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:20:06.924 17:16:36 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:20:06.924 17:16:36 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:20:06.924 17:16:36 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:20:06.924 17:16:36 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:20:06.924 17:16:36 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:20:06.924 17:16:36 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:20:06.924 17:16:36 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:20:06.924 17:16:36 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:20:06.924 17:16:36 -- common/autotest_common.sh@1531 -- # grep oacs 00:20:06.924 17:16:36 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:20:06.924 17:16:36 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:20:06.924 17:16:36 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:20:06.924 17:16:36 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:20:06.924 17:16:36 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:20:06.924 17:16:36 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:20:06.924 17:16:36 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:20:06.924 17:16:36 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:20:06.924 17:16:36 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:20:06.924 17:16:36 -- common/autotest_common.sh@1543 -- # continue 00:20:06.924 17:16:36 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:20:06.924 17:16:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.924 17:16:36 -- common/autotest_common.sh@10 -- # set +x 00:20:06.924 17:16:36 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:20:06.924 17:16:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.924 17:16:36 -- common/autotest_common.sh@10 -- # set +x 00:20:06.924 17:16:36 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:07.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:07.857 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:07.857 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:08.116 17:16:37 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:20:08.116 17:16:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.116 17:16:37 -- common/autotest_common.sh@10 -- # set +x 00:20:08.116 17:16:38 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:20:08.116 17:16:38 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:20:08.116 17:16:38 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:20:08.116 17:16:38 -- common/autotest_common.sh@1563 -- # bdfs=() 00:20:08.116 17:16:38 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:20:08.116 17:16:38 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:20:08.116 17:16:38 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:20:08.116 17:16:38 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:20:08.116 
17:16:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:08.116 17:16:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:20:08.116 17:16:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:08.116 17:16:38 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:08.116 17:16:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:08.116 17:16:38 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:20:08.116 17:16:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:08.116 17:16:38 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:20:08.116 17:16:38 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:20:08.116 17:16:38 -- common/autotest_common.sh@1566 -- # device=0x0010 00:20:08.116 17:16:38 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:20:08.116 17:16:38 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:20:08.116 17:16:38 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:20:08.116 17:16:38 -- common/autotest_common.sh@1566 -- # device=0x0010 00:20:08.116 17:16:38 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:20:08.116 17:16:38 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:20:08.116 17:16:38 -- common/autotest_common.sh@1572 -- # return 0 00:20:08.116 17:16:38 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:20:08.116 17:16:38 -- common/autotest_common.sh@1580 -- # return 0 00:20:08.116 17:16:38 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:20:08.116 17:16:38 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:20:08.116 17:16:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:20:08.116 17:16:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:20:08.116 17:16:38 -- spdk/autotest.sh@149 -- # timing_enter lib 00:20:08.116 17:16:38 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.116 17:16:38 -- common/autotest_common.sh@10 -- # set +x 00:20:08.116 17:16:38 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:20:08.116 17:16:38 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:20:08.116 17:16:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:08.116 17:16:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.116 17:16:38 -- common/autotest_common.sh@10 -- # set +x 00:20:08.116 ************************************ 00:20:08.116 START TEST env 00:20:08.116 ************************************ 00:20:08.116 17:16:38 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:20:08.374 * Looking for test storage... 00:20:08.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:20:08.374 17:16:38 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:08.374 17:16:38 env -- common/autotest_common.sh@1693 -- # lcov --version 00:20:08.374 17:16:38 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:08.374 17:16:38 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:08.374 17:16:38 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:08.374 17:16:38 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:08.374 17:16:38 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:08.374 17:16:38 env -- scripts/common.sh@336 -- # IFS=.-: 00:20:08.374 17:16:38 env -- scripts/common.sh@336 -- # read -ra ver1 00:20:08.374 17:16:38 env -- scripts/common.sh@337 -- # IFS=.-: 00:20:08.375 17:16:38 env -- scripts/common.sh@337 -- # read -ra ver2 00:20:08.375 17:16:38 env -- scripts/common.sh@338 -- # local 'op=<' 00:20:08.375 17:16:38 env -- scripts/common.sh@340 -- # ver1_l=2 00:20:08.375 17:16:38 env -- scripts/common.sh@341 -- # ver2_l=1 00:20:08.375 17:16:38 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:08.375 17:16:38 env -- 
scripts/common.sh@344 -- # case "$op" in 00:20:08.375 17:16:38 env -- scripts/common.sh@345 -- # : 1 00:20:08.375 17:16:38 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:08.375 17:16:38 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:08.375 17:16:38 env -- scripts/common.sh@365 -- # decimal 1 00:20:08.375 17:16:38 env -- scripts/common.sh@353 -- # local d=1 00:20:08.375 17:16:38 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:08.375 17:16:38 env -- scripts/common.sh@355 -- # echo 1 00:20:08.375 17:16:38 env -- scripts/common.sh@365 -- # ver1[v]=1 00:20:08.375 17:16:38 env -- scripts/common.sh@366 -- # decimal 2 00:20:08.375 17:16:38 env -- scripts/common.sh@353 -- # local d=2 00:20:08.375 17:16:38 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:08.375 17:16:38 env -- scripts/common.sh@355 -- # echo 2 00:20:08.375 17:16:38 env -- scripts/common.sh@366 -- # ver2[v]=2 00:20:08.375 17:16:38 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:08.375 17:16:38 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:08.375 17:16:38 env -- scripts/common.sh@368 -- # return 0 00:20:08.375 17:16:38 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:08.375 17:16:38 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:08.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.375 --rc genhtml_branch_coverage=1 00:20:08.375 --rc genhtml_function_coverage=1 00:20:08.375 --rc genhtml_legend=1 00:20:08.375 --rc geninfo_all_blocks=1 00:20:08.375 --rc geninfo_unexecuted_blocks=1 00:20:08.375 00:20:08.375 ' 00:20:08.375 17:16:38 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:08.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.375 --rc genhtml_branch_coverage=1 00:20:08.375 --rc genhtml_function_coverage=1 00:20:08.375 --rc genhtml_legend=1 00:20:08.375 --rc 
geninfo_all_blocks=1 00:20:08.375 --rc geninfo_unexecuted_blocks=1 00:20:08.375 00:20:08.375 ' 00:20:08.375 17:16:38 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:08.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.375 --rc genhtml_branch_coverage=1 00:20:08.375 --rc genhtml_function_coverage=1 00:20:08.375 --rc genhtml_legend=1 00:20:08.375 --rc geninfo_all_blocks=1 00:20:08.375 --rc geninfo_unexecuted_blocks=1 00:20:08.375 00:20:08.375 ' 00:20:08.375 17:16:38 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:08.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.375 --rc genhtml_branch_coverage=1 00:20:08.375 --rc genhtml_function_coverage=1 00:20:08.375 --rc genhtml_legend=1 00:20:08.375 --rc geninfo_all_blocks=1 00:20:08.375 --rc geninfo_unexecuted_blocks=1 00:20:08.375 00:20:08.375 ' 00:20:08.375 17:16:38 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:20:08.375 17:16:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:08.375 17:16:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.375 17:16:38 env -- common/autotest_common.sh@10 -- # set +x 00:20:08.375 ************************************ 00:20:08.375 START TEST env_memory 00:20:08.375 ************************************ 00:20:08.375 17:16:38 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:20:08.375 00:20:08.375 00:20:08.375 CUnit - A unit testing framework for C - Version 2.1-3 00:20:08.375 http://cunit.sourceforge.net/ 00:20:08.375 00:20:08.375 00:20:08.375 Suite: memory 00:20:08.634 Test: alloc and free memory map ...[2024-11-26 17:16:38.518215] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:20:08.634 passed 00:20:08.634 Test: mem map translation ...[2024-11-26 17:16:38.563936] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:20:08.634 [2024-11-26 17:16:38.564186] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:20:08.634 [2024-11-26 17:16:38.564352] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:20:08.634 [2024-11-26 17:16:38.564490] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:20:08.634 passed 00:20:08.634 Test: mem map registration ...[2024-11-26 17:16:38.633252] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:20:08.634 [2024-11-26 17:16:38.633506] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:20:08.634 passed 00:20:08.634 Test: mem map adjacent registrations ...passed 00:20:08.634 00:20:08.634 Run Summary: Type Total Ran Passed Failed Inactive 00:20:08.634 suites 1 1 n/a 0 0 00:20:08.634 tests 4 4 4 0 0 00:20:08.634 asserts 152 152 152 0 n/a 00:20:08.634 00:20:08.634 Elapsed time = 0.246 seconds 00:20:08.634 00:20:08.634 real 0m0.289s 00:20:08.634 user 0m0.251s 00:20:08.634 sys 0m0.028s 00:20:08.635 17:16:38 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.635 17:16:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:20:08.635 ************************************ 00:20:08.635 END TEST env_memory 00:20:08.635 ************************************ 00:20:08.896 17:16:38 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:20:08.896 
17:16:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:08.896 17:16:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.896 17:16:38 env -- common/autotest_common.sh@10 -- # set +x 00:20:08.896 ************************************ 00:20:08.896 START TEST env_vtophys 00:20:08.896 ************************************ 00:20:08.896 17:16:38 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:20:08.896 EAL: lib.eal log level changed from notice to debug 00:20:08.896 EAL: Detected lcore 0 as core 0 on socket 0 00:20:08.896 EAL: Detected lcore 1 as core 0 on socket 0 00:20:08.896 EAL: Detected lcore 2 as core 0 on socket 0 00:20:08.896 EAL: Detected lcore 3 as core 0 on socket 0 00:20:08.896 EAL: Detected lcore 4 as core 0 on socket 0 00:20:08.896 EAL: Detected lcore 5 as core 0 on socket 0 00:20:08.896 EAL: Detected lcore 6 as core 0 on socket 0 00:20:08.896 EAL: Detected lcore 7 as core 0 on socket 0 00:20:08.896 EAL: Detected lcore 8 as core 0 on socket 0 00:20:08.896 EAL: Detected lcore 9 as core 0 on socket 0 00:20:08.896 EAL: Maximum logical cores by configuration: 128 00:20:08.896 EAL: Detected CPU lcores: 10 00:20:08.896 EAL: Detected NUMA nodes: 1 00:20:08.896 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:20:08.896 EAL: Detected shared linkage of DPDK 00:20:08.896 EAL: No shared files mode enabled, IPC will be disabled 00:20:08.896 EAL: Selected IOVA mode 'PA' 00:20:08.896 EAL: Probing VFIO support... 00:20:08.896 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:20:08.896 EAL: VFIO modules not loaded, skipping VFIO support... 00:20:08.896 EAL: Ask a virtual area of 0x2e000 bytes 00:20:08.896 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:20:08.896 EAL: Setting up physically contiguous memory... 
00:20:08.896 EAL: Setting maximum number of open files to 524288 00:20:08.896 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:20:08.896 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:20:08.896 EAL: Ask a virtual area of 0x61000 bytes 00:20:08.896 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:20:08.896 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:08.896 EAL: Ask a virtual area of 0x400000000 bytes 00:20:08.896 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:20:08.896 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:20:08.896 EAL: Ask a virtual area of 0x61000 bytes 00:20:08.896 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:20:08.896 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:08.896 EAL: Ask a virtual area of 0x400000000 bytes 00:20:08.896 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:20:08.896 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:20:08.896 EAL: Ask a virtual area of 0x61000 bytes 00:20:08.896 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:20:08.896 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:08.896 EAL: Ask a virtual area of 0x400000000 bytes 00:20:08.896 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:20:08.896 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:20:08.896 EAL: Ask a virtual area of 0x61000 bytes 00:20:08.896 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:20:08.896 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:08.896 EAL: Ask a virtual area of 0x400000000 bytes 00:20:08.896 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:20:08.896 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:20:08.896 EAL: Hugepages will be freed exactly as allocated. 
00:20:08.896 EAL: No shared files mode enabled, IPC is disabled 00:20:08.896 EAL: No shared files mode enabled, IPC is disabled 00:20:09.155 EAL: TSC frequency is ~2490000 KHz 00:20:09.155 EAL: Main lcore 0 is ready (tid=7f999a2d1a40;cpuset=[0]) 00:20:09.155 EAL: Trying to obtain current memory policy. 00:20:09.155 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:09.155 EAL: Restoring previous memory policy: 0 00:20:09.155 EAL: request: mp_malloc_sync 00:20:09.155 EAL: No shared files mode enabled, IPC is disabled 00:20:09.155 EAL: Heap on socket 0 was expanded by 2MB 00:20:09.155 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:20:09.155 EAL: No PCI address specified using 'addr=' in: bus=pci 00:20:09.155 EAL: Mem event callback 'spdk:(nil)' registered 00:20:09.155 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:20:09.155 00:20:09.155 00:20:09.155 CUnit - A unit testing framework for C - Version 2.1-3 00:20:09.155 http://cunit.sourceforge.net/ 00:20:09.155 00:20:09.155 00:20:09.155 Suite: components_suite 00:20:09.724 Test: vtophys_malloc_test ...passed 00:20:09.724 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:20:09.724 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:09.724 EAL: Restoring previous memory policy: 4 00:20:09.724 EAL: Calling mem event callback 'spdk:(nil)' 00:20:09.724 EAL: request: mp_malloc_sync 00:20:09.724 EAL: No shared files mode enabled, IPC is disabled 00:20:09.724 EAL: Heap on socket 0 was expanded by 4MB 00:20:09.724 EAL: Calling mem event callback 'spdk:(nil)' 00:20:09.724 EAL: request: mp_malloc_sync 00:20:09.724 EAL: No shared files mode enabled, IPC is disabled 00:20:09.724 EAL: Heap on socket 0 was shrunk by 4MB 00:20:09.724 EAL: Trying to obtain current memory policy. 
00:20:09.724 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:09.724 EAL: Restoring previous memory policy: 4 00:20:09.724 EAL: Calling mem event callback 'spdk:(nil)' 00:20:09.724 EAL: request: mp_malloc_sync 00:20:09.724 EAL: No shared files mode enabled, IPC is disabled 00:20:09.724 EAL: Heap on socket 0 was expanded by 6MB 00:20:09.724 EAL: Calling mem event callback 'spdk:(nil)' 00:20:09.724 EAL: request: mp_malloc_sync 00:20:09.724 EAL: No shared files mode enabled, IPC is disabled 00:20:09.724 EAL: Heap on socket 0 was shrunk by 6MB 00:20:09.724 EAL: Trying to obtain current memory policy. 00:20:09.724 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:09.724 EAL: Restoring previous memory policy: 4 00:20:09.724 EAL: Calling mem event callback 'spdk:(nil)' 00:20:09.724 EAL: request: mp_malloc_sync 00:20:09.724 EAL: No shared files mode enabled, IPC is disabled 00:20:09.724 EAL: Heap on socket 0 was expanded by 10MB 00:20:09.724 EAL: Calling mem event callback 'spdk:(nil)' 00:20:09.724 EAL: request: mp_malloc_sync 00:20:09.725 EAL: No shared files mode enabled, IPC is disabled 00:20:09.725 EAL: Heap on socket 0 was shrunk by 10MB 00:20:09.725 EAL: Trying to obtain current memory policy. 00:20:09.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:09.725 EAL: Restoring previous memory policy: 4 00:20:09.725 EAL: Calling mem event callback 'spdk:(nil)' 00:20:09.725 EAL: request: mp_malloc_sync 00:20:09.725 EAL: No shared files mode enabled, IPC is disabled 00:20:09.725 EAL: Heap on socket 0 was expanded by 18MB 00:20:09.725 EAL: Calling mem event callback 'spdk:(nil)' 00:20:09.725 EAL: request: mp_malloc_sync 00:20:09.725 EAL: No shared files mode enabled, IPC is disabled 00:20:09.725 EAL: Heap on socket 0 was shrunk by 18MB 00:20:09.725 EAL: Trying to obtain current memory policy. 
00:20:09.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:09.725 EAL: Restoring previous memory policy: 4 00:20:09.725 EAL: Calling mem event callback 'spdk:(nil)' 00:20:09.725 EAL: request: mp_malloc_sync 00:20:09.725 EAL: No shared files mode enabled, IPC is disabled 00:20:09.725 EAL: Heap on socket 0 was expanded by 34MB 00:20:09.725 EAL: Calling mem event callback 'spdk:(nil)' 00:20:09.725 EAL: request: mp_malloc_sync 00:20:09.725 EAL: No shared files mode enabled, IPC is disabled 00:20:09.725 EAL: Heap on socket 0 was shrunk by 34MB 00:20:09.984 EAL: Trying to obtain current memory policy. 00:20:09.984 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:09.984 EAL: Restoring previous memory policy: 4 00:20:09.984 EAL: Calling mem event callback 'spdk:(nil)' 00:20:09.984 EAL: request: mp_malloc_sync 00:20:09.984 EAL: No shared files mode enabled, IPC is disabled 00:20:09.984 EAL: Heap on socket 0 was expanded by 66MB 00:20:09.984 EAL: Calling mem event callback 'spdk:(nil)' 00:20:09.984 EAL: request: mp_malloc_sync 00:20:09.984 EAL: No shared files mode enabled, IPC is disabled 00:20:09.984 EAL: Heap on socket 0 was shrunk by 66MB 00:20:10.242 EAL: Trying to obtain current memory policy. 00:20:10.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:10.242 EAL: Restoring previous memory policy: 4 00:20:10.242 EAL: Calling mem event callback 'spdk:(nil)' 00:20:10.242 EAL: request: mp_malloc_sync 00:20:10.242 EAL: No shared files mode enabled, IPC is disabled 00:20:10.242 EAL: Heap on socket 0 was expanded by 130MB 00:20:10.500 EAL: Calling mem event callback 'spdk:(nil)' 00:20:10.500 EAL: request: mp_malloc_sync 00:20:10.500 EAL: No shared files mode enabled, IPC is disabled 00:20:10.500 EAL: Heap on socket 0 was shrunk by 130MB 00:20:10.759 EAL: Trying to obtain current memory policy. 
00:20:10.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:10.759 EAL: Restoring previous memory policy: 4 00:20:10.759 EAL: Calling mem event callback 'spdk:(nil)' 00:20:10.759 EAL: request: mp_malloc_sync 00:20:10.759 EAL: No shared files mode enabled, IPC is disabled 00:20:10.759 EAL: Heap on socket 0 was expanded by 258MB 00:20:11.325 EAL: Calling mem event callback 'spdk:(nil)' 00:20:11.325 EAL: request: mp_malloc_sync 00:20:11.325 EAL: No shared files mode enabled, IPC is disabled 00:20:11.325 EAL: Heap on socket 0 was shrunk by 258MB 00:20:11.892 EAL: Trying to obtain current memory policy. 00:20:11.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:11.892 EAL: Restoring previous memory policy: 4 00:20:11.892 EAL: Calling mem event callback 'spdk:(nil)' 00:20:11.892 EAL: request: mp_malloc_sync 00:20:11.892 EAL: No shared files mode enabled, IPC is disabled 00:20:11.892 EAL: Heap on socket 0 was expanded by 514MB 00:20:12.827 EAL: Calling mem event callback 'spdk:(nil)' 00:20:13.085 EAL: request: mp_malloc_sync 00:20:13.085 EAL: No shared files mode enabled, IPC is disabled 00:20:13.085 EAL: Heap on socket 0 was shrunk by 514MB 00:20:14.120 EAL: Trying to obtain current memory policy. 
00:20:14.120 EAL: Setting policy MPOL_PREFERRED for socket 0
00:20:14.380 EAL: Restoring previous memory policy: 4
00:20:14.380 EAL: Calling mem event callback 'spdk:(nil)'
00:20:14.380 EAL: request: mp_malloc_sync
00:20:14.380 EAL: No shared files mode enabled, IPC is disabled
00:20:14.380 EAL: Heap on socket 0 was expanded by 1026MB
00:20:16.282 EAL: Calling mem event callback 'spdk:(nil)'
00:20:16.541 EAL: request: mp_malloc_sync
00:20:16.541 EAL: No shared files mode enabled, IPC is disabled
00:20:16.541 EAL: Heap on socket 0 was shrunk by 1026MB
00:20:18.444 passed
00:20:18.444
00:20:18.444 Run Summary: Type Total Ran Passed Failed Inactive
00:20:18.444 suites 1 1 n/a 0 0
00:20:18.444 tests 2 2 2 0 0
00:20:18.444 asserts 5733 5733 5733 0 n/a
00:20:18.444
00:20:18.444 Elapsed time = 9.122 seconds
00:20:18.444 EAL: Calling mem event callback 'spdk:(nil)'
00:20:18.444 EAL: request: mp_malloc_sync
00:20:18.444 EAL: No shared files mode enabled, IPC is disabled
00:20:18.444 EAL: Heap on socket 0 was shrunk by 2MB
00:20:18.444 EAL: No shared files mode enabled, IPC is disabled
00:20:18.444 EAL: No shared files mode enabled, IPC is disabled
00:20:18.444 EAL: No shared files mode enabled, IPC is disabled
00:20:18.444
00:20:18.444 real 0m9.462s
00:20:18.444 user 0m8.054s
00:20:18.444 sys 0m1.244s
00:20:18.444 17:16:48 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:18.444 17:16:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:20:18.444 ************************************
00:20:18.444 END TEST env_vtophys
00:20:18.444 ************************************
00:20:18.444 17:16:48 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:20:18.444 17:16:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:18.444 17:16:48 env
************************************
00:20:18.444 START TEST env_pci ************************************
00:20:18.444 17:16:48 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:20:18.444
00:20:18.444
00:20:18.444 CUnit - A unit testing framework for C - Version 2.1-3
00:20:18.444 http://cunit.sourceforge.net/
00:20:18.444
00:20:18.444
00:20:18.444 Suite: pci
00:20:18.444 Test: pci_hook ...[2024-11-26 17:16:48.398912] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56702 has claimed it
00:20:18.444 passed
00:20:18.444
00:20:18.444 EAL: Cannot find device (10000:00:01.0)
00:20:18.445 EAL: Failed to attach device on primary process
00:20:18.445 Run Summary: Type Total Ran Passed Failed Inactive
00:20:18.445 suites 1 1 n/a 0 0
00:20:18.445 tests 1 1 1 0 0
00:20:18.445 asserts 25 25 25 0 n/a
00:20:18.445
00:20:18.445 Elapsed time = 0.010 seconds
00:20:18.445
00:20:18.445 real 0m0.120s
00:20:18.445 user 0m0.050s
00:20:18.445 sys 0m0.069s
00:20:18.445 17:16:48 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:18.445 17:16:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:20:18.445 ************************************
00:20:18.445 END TEST env_pci ************************************
00:20:18.445 17:16:48 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:20:18.445 17:16:48 env -- env/env.sh@15 -- # uname
00:20:18.445 17:16:48 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:20:18.445 17:16:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:20:18.445 17:16:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:20:18.445 17:16:48 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:20:18.445 17:16:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:18.445 17:16:48 env -- common/autotest_common.sh@10 -- # set +x
00:20:18.445 ************************************
00:20:18.445 START TEST env_dpdk_post_init ************************************
00:20:18.445 17:16:48 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:20:18.704 EAL: Detected CPU lcores: 10
00:20:18.704 EAL: Detected NUMA nodes: 1
00:20:18.704 EAL: Detected shared linkage of DPDK
00:20:18.704 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:20:18.704 EAL: Selected IOVA mode 'PA'
00:20:18.704 TELEMETRY: No legacy callbacks, legacy socket not created
00:20:18.704 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:20:18.704 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:20:18.964 Starting DPDK initialization...
00:20:18.964 Starting SPDK post initialization...
00:20:18.964 SPDK NVMe probe
00:20:18.964 Attaching to 0000:00:10.0
00:20:18.964 Attaching to 0000:00:11.0
00:20:18.964 Attached to 0000:00:10.0
00:20:18.964 Attached to 0000:00:11.0
00:20:18.964 Cleaning up...
00:20:18.964
00:20:18.964 real 0m0.307s
00:20:18.964 user 0m0.096s
00:20:18.964 sys 0m0.112s
00:20:18.964 17:16:48 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:18.964 17:16:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:20:18.964 ************************************
00:20:18.964 END TEST env_dpdk_post_init
00:20:18.964 ************************************
00:20:18.964 17:16:48 env -- env/env.sh@26 -- # uname
00:20:18.964 17:16:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:20:18.964 17:16:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:20:18.964 17:16:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:18.964 17:16:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:18.964 17:16:48 env -- common/autotest_common.sh@10 -- # set +x
00:20:18.964 ************************************
00:20:18.964 START TEST env_mem_callbacks
00:20:18.964 ************************************
00:20:18.964 17:16:48 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:20:18.964 EAL: Detected CPU lcores: 10
00:20:18.964 EAL: Detected NUMA nodes: 1
00:20:18.964 EAL: Detected shared linkage of DPDK
00:20:18.964 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:20:18.964 EAL: Selected IOVA mode 'PA'
00:20:19.236 TELEMETRY: No legacy callbacks, legacy socket not created
00:20:19.236
00:20:19.236
00:20:19.236 CUnit - A unit testing framework for C - Version 2.1-3
00:20:19.236 http://cunit.sourceforge.net/
00:20:19.236
00:20:19.236
00:20:19.236 Suite: memory
00:20:19.236 Test: test ...
00:20:19.236 register 0x200000200000 2097152
00:20:19.236 malloc 3145728
00:20:19.236 register 0x200000400000 4194304
00:20:19.236 buf 0x2000004fffc0 len 3145728 PASSED
00:20:19.236 malloc 64
00:20:19.236 buf 0x2000004ffec0 len 64 PASSED
00:20:19.236 malloc 4194304
00:20:19.236 register 0x200000800000 6291456
00:20:19.236 buf 0x2000009fffc0 len 4194304 PASSED
00:20:19.236 free 0x2000004fffc0 3145728
00:20:19.236 free 0x2000004ffec0 64
00:20:19.236 unregister 0x200000400000 4194304 PASSED
00:20:19.236 free 0x2000009fffc0 4194304
00:20:19.236 unregister 0x200000800000 6291456 PASSED
00:20:19.236 malloc 8388608
00:20:19.236 register 0x200000400000 10485760
00:20:19.236 buf 0x2000005fffc0 len 8388608 PASSED
00:20:19.236 free 0x2000005fffc0 8388608
00:20:19.236 unregister 0x200000400000 10485760 PASSED
00:20:19.236 passed
00:20:19.236
00:20:19.236 Run Summary: Type Total Ran Passed Failed Inactive
00:20:19.236 suites 1 1 n/a 0 0
00:20:19.236 tests 1 1 1 0 0
00:20:19.236 asserts 15 15 15 0 n/a
00:20:19.236
00:20:19.236 Elapsed time = 0.083 seconds
00:20:19.236
00:20:19.236 real 0m0.315s
00:20:19.236 user 0m0.117s
00:20:19.236 sys 0m0.096s
00:20:19.236 17:16:49 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:19.236 17:16:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:20:19.236 ************************************
00:20:19.236 END TEST env_mem_callbacks
00:20:19.236 ************************************
00:20:19.236 ************************************
00:20:19.236 END TEST env
00:20:19.236 ************************************
00:20:19.236
00:20:19.236 real 0m11.134s
00:20:19.236 user 0m8.840s
00:20:19.236 sys 0m1.915s
00:20:19.236 17:16:49 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:19.236 17:16:49 env -- common/autotest_common.sh@10 -- # set +x
00:20:19.494 17:16:49 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:20:19.494 17:16:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:19.494 17:16:49 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:19.494 17:16:49 -- common/autotest_common.sh@10 -- # set +x
00:20:19.494 ************************************
00:20:19.494 START TEST rpc
00:20:19.494 ************************************
00:20:19.494 17:16:49 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:20:19.494 * Looking for test storage...
00:20:19.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:20:19.494 17:16:49 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:20:19.494 17:16:49 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:20:19.494 17:16:49 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:20:19.752 17:16:49 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:20:19.752 17:16:49 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:19.752 17:16:49 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:19.752 17:16:49 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:19.752 17:16:49 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:20:19.752 17:16:49 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:20:19.752 17:16:49 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:20:19.752 17:16:49 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:20:19.752 17:16:49 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:20:19.752 17:16:49 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:20:19.752 17:16:49 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:20:19.752 17:16:49 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:19.752 17:16:49 rpc -- scripts/common.sh@344 -- # case "$op" in
00:20:19.752 17:16:49 rpc -- scripts/common.sh@345 -- # : 1
00:20:19.752 17:16:49 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:19.752 17:16:49 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:19.752 17:16:49 rpc -- scripts/common.sh@365 -- # decimal 1
00:20:19.752 17:16:49 rpc -- scripts/common.sh@353 -- # local d=1
00:20:19.752 17:16:49 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:19.752 17:16:49 rpc -- scripts/common.sh@355 -- # echo 1
00:20:19.752 17:16:49 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:20:19.752 17:16:49 rpc -- scripts/common.sh@366 -- # decimal 2
00:20:19.752 17:16:49 rpc -- scripts/common.sh@353 -- # local d=2
00:20:19.752 17:16:49 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:19.752 17:16:49 rpc -- scripts/common.sh@355 -- # echo 2
00:20:19.752 17:16:49 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:20:19.752 17:16:49 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:19.752 17:16:49 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:19.752 17:16:49 rpc -- scripts/common.sh@368 -- # return 0
00:20:19.753 17:16:49 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:19.753 17:16:49 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:20:19.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:19.753 --rc genhtml_branch_coverage=1
00:20:19.753 --rc genhtml_function_coverage=1
00:20:19.753 --rc genhtml_legend=1
00:20:19.753 --rc geninfo_all_blocks=1
00:20:19.753 --rc geninfo_unexecuted_blocks=1
00:20:19.753
00:20:19.753 '
00:20:19.753 17:16:49 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:20:19.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:19.753 --rc genhtml_branch_coverage=1
00:20:19.753 --rc genhtml_function_coverage=1
00:20:19.753 --rc genhtml_legend=1
00:20:19.753 --rc geninfo_all_blocks=1
00:20:19.753 --rc geninfo_unexecuted_blocks=1
00:20:19.753
00:20:19.753 '
00:20:19.753 17:16:49 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:20:19.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:19.753 --rc genhtml_branch_coverage=1
00:20:19.753 --rc genhtml_function_coverage=1
00:20:19.753 --rc genhtml_legend=1
00:20:19.753 --rc geninfo_all_blocks=1
00:20:19.753 --rc geninfo_unexecuted_blocks=1
00:20:19.753
00:20:19.753 '
00:20:19.753 17:16:49 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:20:19.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:19.753 --rc genhtml_branch_coverage=1
00:20:19.753 --rc genhtml_function_coverage=1
00:20:19.753 --rc genhtml_legend=1
00:20:19.753 --rc geninfo_all_blocks=1
00:20:19.753 --rc geninfo_unexecuted_blocks=1
00:20:19.753
00:20:19.753 '
00:20:19.753 17:16:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56829
00:20:19.753 17:16:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:20:19.753 17:16:49 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:20:19.753 17:16:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56829
00:20:19.753 17:16:49 rpc -- common/autotest_common.sh@835 -- # '[' -z 56829 ']'
00:20:19.753 17:16:49 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:19.753 17:16:49 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:19.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:19.753 17:16:49 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:19.753 17:16:49 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:19.753 17:16:49 rpc -- common/autotest_common.sh@10 -- # set +x
00:20:19.753 [2024-11-26 17:16:49.768810] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization...
00:20:19.753 [2024-11-26 17:16:49.769024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56829 ]
00:20:20.010 [2024-11-26 17:16:49.956073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:20.268 [2024-11-26 17:16:50.130785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:20:20.268 [2024-11-26 17:16:50.130867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56829' to capture a snapshot of events at runtime.
00:20:20.268 [2024-11-26 17:16:50.130883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:20.268 [2024-11-26 17:16:50.130898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:20.268 [2024-11-26 17:16:50.130910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56829 for offline analysis/debug.
00:20:20.268 [2024-11-26 17:16:50.132400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:21.201 17:16:51 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:21.201 17:16:51 rpc -- common/autotest_common.sh@868 -- # return 0
00:20:21.201 17:16:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:20:21.201 17:16:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:20:21.201 17:16:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:20:21.201 17:16:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:20:21.201 17:16:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:21.201 17:16:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:21.201 17:16:51 rpc -- common/autotest_common.sh@10 -- # set +x
00:20:21.201 ************************************
00:20:21.201 START TEST rpc_integrity
00:20:21.201 ************************************
00:20:21.201 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:20:21.201 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:20:21.201 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.201 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:20:21.201 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.201 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:20:21.201 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:20:21.460 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:20:21.460 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:20:21.460 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.460 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:20:21.460 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.460 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:20:21.460 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:20:21.460 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.460 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:20:21.460 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.460 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:20:21.460 {
00:20:21.460 "name": "Malloc0",
00:20:21.460 "aliases": [
00:20:21.460 "f15a9f1f-fd25-4bc8-8fcb-2ed3c97c899b"
00:20:21.460 ],
00:20:21.460 "product_name": "Malloc disk",
00:20:21.460 "block_size": 512,
00:20:21.460 "num_blocks": 16384,
00:20:21.460 "uuid": "f15a9f1f-fd25-4bc8-8fcb-2ed3c97c899b",
00:20:21.460 "assigned_rate_limits": {
00:20:21.460 "rw_ios_per_sec": 0,
00:20:21.460 "rw_mbytes_per_sec": 0,
00:20:21.460 "r_mbytes_per_sec": 0,
00:20:21.460 "w_mbytes_per_sec": 0
00:20:21.460 },
00:20:21.460 "claimed": false,
00:20:21.460 "zoned": false,
00:20:21.460 "supported_io_types": {
00:20:21.460 "read": true,
00:20:21.460 "write": true,
00:20:21.460 "unmap": true,
00:20:21.460 "flush": true,
00:20:21.460 "reset": true,
00:20:21.460 "nvme_admin": false,
00:20:21.460 "nvme_io": false,
00:20:21.460 "nvme_io_md": false,
00:20:21.460 "write_zeroes": true,
00:20:21.460 "zcopy": true,
00:20:21.460 "get_zone_info": false,
00:20:21.460 "zone_management": false,
00:20:21.460 "zone_append": false,
00:20:21.460 "compare": false,
00:20:21.460 "compare_and_write": false,
00:20:21.460 "abort": true,
00:20:21.460 "seek_hole": false,
00:20:21.460 "seek_data": false,
00:20:21.460 "copy": true,
00:20:21.460 "nvme_iov_md": false
00:20:21.460 },
00:20:21.460 "memory_domains": [
00:20:21.460 {
00:20:21.460 "dma_device_id": "system",
00:20:21.460 "dma_device_type": 1
00:20:21.460 },
00:20:21.460 {
00:20:21.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:21.460 "dma_device_type": 2
00:20:21.460 }
00:20:21.460 ],
00:20:21.460 "driver_specific": {}
00:20:21.460 }
00:20:21.460 ]'
00:20:21.460 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:20:21.460 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:20:21.460 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:20:21.460 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.460 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:20:21.460 [2024-11-26 17:16:51.438748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:20:21.460 [2024-11-26 17:16:51.438848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:21.460 [2024-11-26 17:16:51.438891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:20:21.460 [2024-11-26 17:16:51.438914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:21.460 [2024-11-26 17:16:51.442061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:21.460 [2024-11-26 17:16:51.442122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:20:21.460 Passthru0
00:20:21.460 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.460 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:20:21.460 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.460 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:20:21.460 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.460 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:20:21.460 {
00:20:21.460 "name": "Malloc0",
00:20:21.460 "aliases": [
00:20:21.460 "f15a9f1f-fd25-4bc8-8fcb-2ed3c97c899b"
00:20:21.460 ],
00:20:21.460 "product_name": "Malloc disk",
00:20:21.460 "block_size": 512,
00:20:21.460 "num_blocks": 16384,
00:20:21.460 "uuid": "f15a9f1f-fd25-4bc8-8fcb-2ed3c97c899b",
00:20:21.460 "assigned_rate_limits": {
00:20:21.460 "rw_ios_per_sec": 0,
00:20:21.460 "rw_mbytes_per_sec": 0,
00:20:21.460 "r_mbytes_per_sec": 0,
00:20:21.460 "w_mbytes_per_sec": 0
00:20:21.460 },
00:20:21.460 "claimed": true,
00:20:21.460 "claim_type": "exclusive_write",
00:20:21.460 "zoned": false,
00:20:21.460 "supported_io_types": {
00:20:21.460 "read": true,
00:20:21.460 "write": true,
00:20:21.460 "unmap": true,
00:20:21.460 "flush": true,
00:20:21.460 "reset": true,
00:20:21.460 "nvme_admin": false,
00:20:21.460 "nvme_io": false,
00:20:21.460 "nvme_io_md": false,
00:20:21.460 "write_zeroes": true,
00:20:21.460 "zcopy": true,
00:20:21.460 "get_zone_info": false,
00:20:21.460 "zone_management": false,
00:20:21.460 "zone_append": false,
00:20:21.460 "compare": false,
00:20:21.460 "compare_and_write": false,
00:20:21.460 "abort": true,
00:20:21.460 "seek_hole": false,
00:20:21.460 "seek_data": false,
00:20:21.460 "copy": true,
00:20:21.460 "nvme_iov_md": false
00:20:21.460 },
00:20:21.460 "memory_domains": [
00:20:21.460 {
00:20:21.460 "dma_device_id": "system",
00:20:21.460 "dma_device_type": 1
00:20:21.460 },
00:20:21.460 {
00:20:21.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:21.460 "dma_device_type": 2
00:20:21.460 }
00:20:21.460 ],
00:20:21.460 "driver_specific": {}
00:20:21.460 },
00:20:21.460 {
00:20:21.460 "name": "Passthru0",
00:20:21.460 "aliases": [
00:20:21.460 "571e78ef-1bc5-5856-b41d-552457588f7c"
00:20:21.460 ],
00:20:21.460 "product_name": "passthru",
00:20:21.460 "block_size": 512,
00:20:21.460 "num_blocks": 16384,
00:20:21.460 "uuid": "571e78ef-1bc5-5856-b41d-552457588f7c",
00:20:21.460 "assigned_rate_limits": {
00:20:21.460 "rw_ios_per_sec": 0,
00:20:21.460 "rw_mbytes_per_sec": 0,
00:20:21.460 "r_mbytes_per_sec": 0,
00:20:21.460 "w_mbytes_per_sec": 0
00:20:21.460 },
00:20:21.460 "claimed": false,
00:20:21.460 "zoned": false,
00:20:21.460 "supported_io_types": {
00:20:21.460 "read": true,
00:20:21.460 "write": true,
00:20:21.460 "unmap": true,
00:20:21.460 "flush": true,
00:20:21.460 "reset": true,
00:20:21.460 "nvme_admin": false,
00:20:21.460 "nvme_io": false,
00:20:21.460 "nvme_io_md": false,
00:20:21.460 "write_zeroes": true,
00:20:21.460 "zcopy": true,
00:20:21.460 "get_zone_info": false,
00:20:21.461 "zone_management": false,
00:20:21.461 "zone_append": false,
00:20:21.461 "compare": false,
00:20:21.461 "compare_and_write": false,
00:20:21.461 "abort": true,
00:20:21.461 "seek_hole": false,
00:20:21.461 "seek_data": false,
00:20:21.461 "copy": true,
00:20:21.461 "nvme_iov_md": false
00:20:21.461 },
00:20:21.461 "memory_domains": [
00:20:21.461 {
00:20:21.461 "dma_device_id": "system",
00:20:21.461 "dma_device_type": 1
00:20:21.461 },
00:20:21.461 {
00:20:21.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:21.461 "dma_device_type": 2
00:20:21.461 }
00:20:21.461 ],
00:20:21.461 "driver_specific": {
00:20:21.461 "passthru": {
00:20:21.461 "name": "Passthru0",
00:20:21.461 "base_bdev_name": "Malloc0"
00:20:21.461 }
00:20:21.461 }
00:20:21.461 }
00:20:21.461 ]'
00:20:21.461 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:20:21.461 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:20:21.461 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:20:21.461 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.461 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:20:21.461 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.461 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:20:21.461 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.461 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:20:21.741 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.741 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:20:21.741 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.741 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:20:21.741 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.742 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:20:21.742 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:20:21.742 17:16:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:20:21.742
00:20:21.742 real 0m0.397s
00:20:21.742 user 0m0.209s
00:20:21.742 sys 0m0.071s
00:20:21.742 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:21.742 17:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:20:21.742 ************************************
00:20:21.742 END TEST rpc_integrity
00:20:21.742 ************************************
00:20:21.742 17:16:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:20:21.742 17:16:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:21.742 17:16:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:21.742 17:16:51 rpc -- common/autotest_common.sh@10 -- # set +x
00:20:21.742 ************************************
00:20:21.742 START TEST rpc_plugins
00:20:21.742 ************************************
00:20:21.742 17:16:51 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:20:21.742 17:16:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:20:21.742 17:16:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.742 17:16:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:20:21.742 17:16:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.742 17:16:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:20:21.742 17:16:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:20:21.742 17:16:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.742 17:16:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:20:21.742 17:16:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.742 17:16:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:20:21.742 {
00:20:21.742 "name": "Malloc1",
00:20:21.742 "aliases": [
00:20:21.742 "3a3e8409-6e3f-49df-a208-8dedb9f95bea"
00:20:21.742 ],
00:20:21.742 "product_name": "Malloc disk",
00:20:21.742 "block_size": 4096,
00:20:21.742 "num_blocks": 256,
00:20:21.742 "uuid": "3a3e8409-6e3f-49df-a208-8dedb9f95bea",
00:20:21.742 "assigned_rate_limits": {
00:20:21.742 "rw_ios_per_sec": 0,
00:20:21.742 "rw_mbytes_per_sec": 0,
00:20:21.742 "r_mbytes_per_sec": 0,
00:20:21.742 "w_mbytes_per_sec": 0
00:20:21.742 },
00:20:21.742 "claimed": false,
00:20:21.742 "zoned": false,
00:20:21.742 "supported_io_types": {
00:20:21.742 "read": true,
00:20:21.742 "write": true,
00:20:21.742 "unmap": true,
00:20:21.742 "flush": true,
00:20:21.742 "reset": true,
00:20:21.742 "nvme_admin": false,
00:20:21.742 "nvme_io": false,
00:20:21.742 "nvme_io_md": false,
00:20:21.742 "write_zeroes": true,
00:20:21.742 "zcopy": true,
00:20:21.742 "get_zone_info": false,
00:20:21.742 "zone_management": false,
00:20:21.742 "zone_append": false,
00:20:21.742 "compare": false,
00:20:21.742 "compare_and_write": false,
00:20:21.742 "abort": true,
00:20:21.742 "seek_hole": false,
00:20:21.742 "seek_data": false,
00:20:21.742 "copy": true,
00:20:21.742 "nvme_iov_md": false
00:20:21.742 },
00:20:21.742 "memory_domains": [
00:20:21.742 {
00:20:21.742 "dma_device_id": "system",
00:20:21.742 "dma_device_type": 1
00:20:21.742 },
00:20:21.742 {
00:20:21.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:21.742 "dma_device_type": 2
00:20:21.742 }
00:20:21.742 ],
00:20:21.742 "driver_specific": {}
00:20:21.742 }
00:20:21.742 ]'
00:20:21.742 17:16:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:20:21.742 17:16:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:20:21.742 17:16:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:20:21.742 17:16:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.742 17:16:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:20:21.742 17:16:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.742 17:16:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:20:21.742 17:16:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.742 17:16:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:20:22.000 17:16:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.000 17:16:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:20:22.000 17:16:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:20:22.000 17:16:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:20:22.000
00:20:22.000 real 0m0.178s
00:20:22.000 user 0m0.101s
00:20:22.000 sys 0m0.028s
00:20:22.000 17:16:51 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:22.000 ************************************
00:20:22.000 END TEST rpc_plugins
00:20:22.000 ************************************
00:20:22.000 17:16:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:20:22.000 17:16:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:20:22.000 17:16:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:22.000 17:16:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:22.000 17:16:51 rpc -- common/autotest_common.sh@10 -- # set +x
00:20:22.000 ************************************
00:20:22.000 START TEST rpc_trace_cmd_test
00:20:22.000 ************************************
00:20:22.000 17:16:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:20:22.000 17:16:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:20:22.000 17:16:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:20:22.000 17:16:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.000 17:16:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.000 17:16:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.000 17:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:20:22.000 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56829",
00:20:22.000 "tpoint_group_mask": "0x8",
00:20:22.000 "iscsi_conn": {
00:20:22.000 "mask": "0x2",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 },
00:20:22.000 "scsi": {
00:20:22.000 "mask": "0x4",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 },
00:20:22.000 "bdev": {
00:20:22.000 "mask": "0x8",
00:20:22.000 "tpoint_mask": "0xffffffffffffffff"
00:20:22.000 },
00:20:22.000 "nvmf_rdma": {
00:20:22.000 "mask": "0x10",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 },
00:20:22.000 "nvmf_tcp": {
00:20:22.000 "mask": "0x20",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 },
00:20:22.000 "ftl": {
00:20:22.000 "mask": "0x40",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 },
00:20:22.000 "blobfs": {
00:20:22.000 "mask": "0x80",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 },
00:20:22.000 "dsa": {
00:20:22.000 "mask": "0x200",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 },
00:20:22.000 "thread": {
00:20:22.000 "mask": "0x400",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 },
00:20:22.000 "nvme_pcie": {
00:20:22.000 "mask": "0x800",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 },
00:20:22.000 "iaa": {
00:20:22.000 "mask": "0x1000",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 },
00:20:22.000 "nvme_tcp": {
00:20:22.000 "mask": "0x2000",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 },
00:20:22.000 "bdev_nvme": {
00:20:22.000 "mask": "0x4000",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 },
00:20:22.000 "sock": {
00:20:22.000 "mask": "0x8000",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 },
00:20:22.000 "blob": {
00:20:22.000 "mask": "0x10000",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 },
00:20:22.000 "bdev_raid": {
00:20:22.000 "mask": "0x20000",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 },
00:20:22.000 "scheduler": {
00:20:22.000 "mask": "0x40000",
00:20:22.000 "tpoint_mask": "0x0"
00:20:22.000 }
00:20:22.000 }'
00:20:22.000 17:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:20:22.000 17:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:20:22.000 17:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:20:22.259 17:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:20:22.259 17:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:20:22.259 17:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:20:22.259 17:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:20:22.259 17:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:20:22.259 17:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:20:22.259 17:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:20:22.259
00:20:22.259 real 0m0.247s
00:20:22.259 user 0m0.191s
00:20:22.259 sys 0m0.046s
00:20:22.259 17:16:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:22.259 17:16:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.259 ************************************ 00:20:22.259 END TEST rpc_trace_cmd_test 00:20:22.259 ************************************ 00:20:22.259 17:16:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:20:22.259 17:16:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:20:22.259 17:16:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:20:22.259 17:16:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:22.259 17:16:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.259 17:16:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:20:22.259 ************************************ 00:20:22.259 START TEST rpc_daemon_integrity 00:20:22.259 ************************************ 00:20:22.259 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:20:22.259 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:22.259 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.259 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:22.259 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.259 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:20:22.259 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:20:22.259 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:20:22.259 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:20:22.259 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.259 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:20:22.518 { 00:20:22.518 "name": "Malloc2", 00:20:22.518 "aliases": [ 00:20:22.518 "a3130928-3cb2-43f4-b0ef-f6182e4ae504" 00:20:22.518 ], 00:20:22.518 "product_name": "Malloc disk", 00:20:22.518 "block_size": 512, 00:20:22.518 "num_blocks": 16384, 00:20:22.518 "uuid": "a3130928-3cb2-43f4-b0ef-f6182e4ae504", 00:20:22.518 "assigned_rate_limits": { 00:20:22.518 "rw_ios_per_sec": 0, 00:20:22.518 "rw_mbytes_per_sec": 0, 00:20:22.518 "r_mbytes_per_sec": 0, 00:20:22.518 "w_mbytes_per_sec": 0 00:20:22.518 }, 00:20:22.518 "claimed": false, 00:20:22.518 "zoned": false, 00:20:22.518 "supported_io_types": { 00:20:22.518 "read": true, 00:20:22.518 "write": true, 00:20:22.518 "unmap": true, 00:20:22.518 "flush": true, 00:20:22.518 "reset": true, 00:20:22.518 "nvme_admin": false, 00:20:22.518 "nvme_io": false, 00:20:22.518 "nvme_io_md": false, 00:20:22.518 "write_zeroes": true, 00:20:22.518 "zcopy": true, 00:20:22.518 "get_zone_info": false, 00:20:22.518 "zone_management": false, 00:20:22.518 "zone_append": false, 00:20:22.518 "compare": false, 00:20:22.518 "compare_and_write": false, 00:20:22.518 "abort": true, 00:20:22.518 "seek_hole": false, 00:20:22.518 "seek_data": false, 00:20:22.518 "copy": true, 00:20:22.518 "nvme_iov_md": false 00:20:22.518 }, 00:20:22.518 "memory_domains": [ 00:20:22.518 { 00:20:22.518 "dma_device_id": "system", 00:20:22.518 "dma_device_type": 1 00:20:22.518 }, 00:20:22.518 { 00:20:22.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.518 "dma_device_type": 2 00:20:22.518 } 
00:20:22.518 ], 00:20:22.518 "driver_specific": {} 00:20:22.518 } 00:20:22.518 ]' 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:22.518 [2024-11-26 17:16:52.447141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:20:22.518 [2024-11-26 17:16:52.447236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.518 [2024-11-26 17:16:52.447266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:22.518 [2024-11-26 17:16:52.447285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.518 [2024-11-26 17:16:52.450463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.518 [2024-11-26 17:16:52.450535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:20:22.518 Passthru0 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:20:22.518 { 00:20:22.518 "name": "Malloc2", 00:20:22.518 "aliases": [ 00:20:22.518 "a3130928-3cb2-43f4-b0ef-f6182e4ae504" 
00:20:22.518 ], 00:20:22.518 "product_name": "Malloc disk", 00:20:22.518 "block_size": 512, 00:20:22.518 "num_blocks": 16384, 00:20:22.518 "uuid": "a3130928-3cb2-43f4-b0ef-f6182e4ae504", 00:20:22.518 "assigned_rate_limits": { 00:20:22.518 "rw_ios_per_sec": 0, 00:20:22.518 "rw_mbytes_per_sec": 0, 00:20:22.518 "r_mbytes_per_sec": 0, 00:20:22.518 "w_mbytes_per_sec": 0 00:20:22.518 }, 00:20:22.518 "claimed": true, 00:20:22.518 "claim_type": "exclusive_write", 00:20:22.518 "zoned": false, 00:20:22.518 "supported_io_types": { 00:20:22.518 "read": true, 00:20:22.518 "write": true, 00:20:22.518 "unmap": true, 00:20:22.518 "flush": true, 00:20:22.518 "reset": true, 00:20:22.518 "nvme_admin": false, 00:20:22.518 "nvme_io": false, 00:20:22.518 "nvme_io_md": false, 00:20:22.518 "write_zeroes": true, 00:20:22.518 "zcopy": true, 00:20:22.518 "get_zone_info": false, 00:20:22.518 "zone_management": false, 00:20:22.518 "zone_append": false, 00:20:22.518 "compare": false, 00:20:22.518 "compare_and_write": false, 00:20:22.518 "abort": true, 00:20:22.518 "seek_hole": false, 00:20:22.518 "seek_data": false, 00:20:22.518 "copy": true, 00:20:22.518 "nvme_iov_md": false 00:20:22.518 }, 00:20:22.518 "memory_domains": [ 00:20:22.518 { 00:20:22.518 "dma_device_id": "system", 00:20:22.518 "dma_device_type": 1 00:20:22.518 }, 00:20:22.518 { 00:20:22.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.518 "dma_device_type": 2 00:20:22.518 } 00:20:22.518 ], 00:20:22.518 "driver_specific": {} 00:20:22.518 }, 00:20:22.518 { 00:20:22.518 "name": "Passthru0", 00:20:22.518 "aliases": [ 00:20:22.518 "617c695a-3cd5-54a6-bbe8-57d0c1b0deba" 00:20:22.518 ], 00:20:22.518 "product_name": "passthru", 00:20:22.518 "block_size": 512, 00:20:22.518 "num_blocks": 16384, 00:20:22.518 "uuid": "617c695a-3cd5-54a6-bbe8-57d0c1b0deba", 00:20:22.518 "assigned_rate_limits": { 00:20:22.518 "rw_ios_per_sec": 0, 00:20:22.518 "rw_mbytes_per_sec": 0, 00:20:22.518 "r_mbytes_per_sec": 0, 00:20:22.518 "w_mbytes_per_sec": 0 
00:20:22.518 }, 00:20:22.518 "claimed": false, 00:20:22.518 "zoned": false, 00:20:22.518 "supported_io_types": { 00:20:22.518 "read": true, 00:20:22.518 "write": true, 00:20:22.518 "unmap": true, 00:20:22.518 "flush": true, 00:20:22.518 "reset": true, 00:20:22.518 "nvme_admin": false, 00:20:22.518 "nvme_io": false, 00:20:22.518 "nvme_io_md": false, 00:20:22.518 "write_zeroes": true, 00:20:22.518 "zcopy": true, 00:20:22.518 "get_zone_info": false, 00:20:22.518 "zone_management": false, 00:20:22.518 "zone_append": false, 00:20:22.518 "compare": false, 00:20:22.518 "compare_and_write": false, 00:20:22.518 "abort": true, 00:20:22.518 "seek_hole": false, 00:20:22.518 "seek_data": false, 00:20:22.518 "copy": true, 00:20:22.518 "nvme_iov_md": false 00:20:22.518 }, 00:20:22.518 "memory_domains": [ 00:20:22.518 { 00:20:22.518 "dma_device_id": "system", 00:20:22.518 "dma_device_type": 1 00:20:22.518 }, 00:20:22.518 { 00:20:22.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.518 "dma_device_type": 2 00:20:22.518 } 00:20:22.518 ], 00:20:22.518 "driver_specific": { 00:20:22.518 "passthru": { 00:20:22.518 "name": "Passthru0", 00:20:22.518 "base_bdev_name": "Malloc2" 00:20:22.518 } 00:20:22.518 } 00:20:22.518 } 00:20:22.518 ]' 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:20:22.518 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:20:22.519 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.519 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:22.519 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.519 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:20:22.519 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:22.519 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:22.519 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.519 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:22.519 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.519 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:22.519 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.519 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:20:22.519 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:20:22.777 17:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:20:22.777 00:20:22.777 real 0m0.337s 00:20:22.777 user 0m0.168s 00:20:22.777 sys 0m0.059s 00:20:22.777 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.777 17:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:22.777 ************************************ 00:20:22.777 END TEST rpc_daemon_integrity 00:20:22.777 ************************************ 00:20:22.777 17:16:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:22.777 17:16:52 rpc -- rpc/rpc.sh@84 -- # killprocess 56829 00:20:22.777 17:16:52 rpc -- common/autotest_common.sh@954 -- # '[' -z 56829 ']' 00:20:22.777 17:16:52 rpc -- common/autotest_common.sh@958 -- # kill -0 56829 00:20:22.777 17:16:52 rpc -- common/autotest_common.sh@959 -- # uname 00:20:22.777 17:16:52 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.777 17:16:52 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56829 00:20:22.777 17:16:52 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:22.777 17:16:52 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:22.777 
killing process with pid 56829 00:20:22.777 17:16:52 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56829' 00:20:22.777 17:16:52 rpc -- common/autotest_common.sh@973 -- # kill 56829 00:20:22.777 17:16:52 rpc -- common/autotest_common.sh@978 -- # wait 56829 00:20:25.308 00:20:25.308 real 0m5.980s 00:20:25.308 user 0m6.300s 00:20:25.308 sys 0m1.251s 00:20:25.308 17:16:55 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.308 17:16:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:20:25.308 ************************************ 00:20:25.308 END TEST rpc 00:20:25.308 ************************************ 00:20:25.566 17:16:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:20:25.566 17:16:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:25.566 17:16:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.566 17:16:55 -- common/autotest_common.sh@10 -- # set +x 00:20:25.566 ************************************ 00:20:25.566 START TEST skip_rpc 00:20:25.566 ************************************ 00:20:25.566 17:16:55 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:20:25.566 * Looking for test storage... 
00:20:25.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:20:25.566 17:16:55 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:25.566 17:16:55 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:20:25.566 17:16:55 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:25.566 17:16:55 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.566 17:16:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:20:25.566 17:16:55 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.566 17:16:55 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:25.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.566 --rc genhtml_branch_coverage=1 00:20:25.566 --rc genhtml_function_coverage=1 00:20:25.566 --rc genhtml_legend=1 00:20:25.566 --rc geninfo_all_blocks=1 00:20:25.566 --rc geninfo_unexecuted_blocks=1 00:20:25.566 00:20:25.566 ' 00:20:25.566 17:16:55 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:25.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.566 --rc genhtml_branch_coverage=1 00:20:25.566 --rc genhtml_function_coverage=1 00:20:25.566 --rc genhtml_legend=1 00:20:25.566 --rc geninfo_all_blocks=1 00:20:25.566 --rc geninfo_unexecuted_blocks=1 00:20:25.566 00:20:25.566 ' 00:20:25.825 17:16:55 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:20:25.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.825 --rc genhtml_branch_coverage=1 00:20:25.825 --rc genhtml_function_coverage=1 00:20:25.825 --rc genhtml_legend=1 00:20:25.825 --rc geninfo_all_blocks=1 00:20:25.825 --rc geninfo_unexecuted_blocks=1 00:20:25.825 00:20:25.825 ' 00:20:25.825 17:16:55 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:25.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.825 --rc genhtml_branch_coverage=1 00:20:25.825 --rc genhtml_function_coverage=1 00:20:25.825 --rc genhtml_legend=1 00:20:25.825 --rc geninfo_all_blocks=1 00:20:25.825 --rc geninfo_unexecuted_blocks=1 00:20:25.825 00:20:25.825 ' 00:20:25.825 17:16:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:20:25.825 17:16:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:20:25.825 17:16:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:20:25.825 17:16:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:25.825 17:16:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.825 17:16:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:25.825 ************************************ 00:20:25.825 START TEST skip_rpc 00:20:25.825 ************************************ 00:20:25.825 17:16:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:20:25.825 17:16:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57069 00:20:25.825 17:16:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:20:25.825 17:16:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:20:25.825 17:16:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:20:25.825 [2024-11-26 17:16:55.821394] Starting SPDK v25.01-pre 
git sha1 ff173863b / DPDK 24.03.0 initialization... 00:20:25.825 [2024-11-26 17:16:55.821561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57069 ] 00:20:26.083 [2024-11-26 17:16:55.997809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.083 [2024-11-26 17:16:56.171360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57069 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57069 ']' 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57069 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57069 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57069' 00:20:31.386 killing process with pid 57069 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57069 00:20:31.386 17:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57069 00:20:33.917 00:20:33.917 real 0m7.854s 00:20:33.917 user 0m7.185s 00:20:33.917 sys 0m0.579s 00:20:33.917 17:17:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.917 17:17:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:33.917 ************************************ 00:20:33.917 END TEST skip_rpc 00:20:33.917 ************************************ 00:20:33.917 17:17:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:20:33.917 17:17:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:33.917 17:17:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.917 17:17:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:33.917 
************************************ 00:20:33.917 START TEST skip_rpc_with_json 00:20:33.917 ************************************ 00:20:33.917 17:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:20:33.917 17:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:20:33.917 17:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57184 00:20:33.917 17:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:33.917 17:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:20:33.917 17:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57184 00:20:33.917 17:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57184 ']' 00:20:33.917 17:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.917 17:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.917 17:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.917 17:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.917 17:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:20:33.917 [2024-11-26 17:17:03.741158] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:20:33.917 [2024-11-26 17:17:03.741364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57184 ] 00:20:33.917 [2024-11-26 17:17:03.926801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.194 [2024-11-26 17:17:04.080400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.128 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.128 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:20:35.129 17:17:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:20:35.129 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.129 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:20:35.129 [2024-11-26 17:17:05.194976] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:20:35.129 request: 00:20:35.129 { 00:20:35.129 "trtype": "tcp", 00:20:35.129 "method": "nvmf_get_transports", 00:20:35.129 "req_id": 1 00:20:35.129 } 00:20:35.129 Got JSON-RPC error response 00:20:35.129 response: 00:20:35.129 { 00:20:35.129 "code": -19, 00:20:35.129 "message": "No such device" 00:20:35.129 } 00:20:35.129 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:35.129 17:17:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:20:35.129 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.129 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:20:35.129 [2024-11-26 17:17:05.211162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:20:35.129 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.129 17:17:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:20:35.129 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.129 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:20:35.389 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.389 17:17:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:20:35.389 { 00:20:35.389 "subsystems": [ 00:20:35.389 { 00:20:35.389 "subsystem": "fsdev", 00:20:35.389 "config": [ 00:20:35.389 { 00:20:35.389 "method": "fsdev_set_opts", 00:20:35.389 "params": { 00:20:35.389 "fsdev_io_pool_size": 65535, 00:20:35.389 "fsdev_io_cache_size": 256 00:20:35.389 } 00:20:35.389 } 00:20:35.389 ] 00:20:35.389 }, 00:20:35.389 { 00:20:35.389 "subsystem": "keyring", 00:20:35.389 "config": [] 00:20:35.389 }, 00:20:35.389 { 00:20:35.389 "subsystem": "iobuf", 00:20:35.389 "config": [ 00:20:35.389 { 00:20:35.389 "method": "iobuf_set_options", 00:20:35.389 "params": { 00:20:35.389 "small_pool_count": 8192, 00:20:35.389 "large_pool_count": 1024, 00:20:35.389 "small_bufsize": 8192, 00:20:35.389 "large_bufsize": 135168, 00:20:35.389 "enable_numa": false 00:20:35.389 } 00:20:35.389 } 00:20:35.389 ] 00:20:35.389 }, 00:20:35.389 { 00:20:35.389 "subsystem": "sock", 00:20:35.389 "config": [ 00:20:35.389 { 00:20:35.389 "method": "sock_set_default_impl", 00:20:35.389 "params": { 00:20:35.389 "impl_name": "posix" 00:20:35.389 } 00:20:35.389 }, 00:20:35.389 { 00:20:35.389 "method": "sock_impl_set_options", 00:20:35.389 "params": { 00:20:35.389 "impl_name": "ssl", 00:20:35.389 "recv_buf_size": 4096, 00:20:35.389 "send_buf_size": 4096, 00:20:35.389 "enable_recv_pipe": true, 00:20:35.389 "enable_quickack": false, 00:20:35.389 
"enable_placement_id": 0, 00:20:35.389 "enable_zerocopy_send_server": true, 00:20:35.389 "enable_zerocopy_send_client": false, 00:20:35.389 "zerocopy_threshold": 0, 00:20:35.389 "tls_version": 0, 00:20:35.389 "enable_ktls": false 00:20:35.389 } 00:20:35.389 }, 00:20:35.389 { 00:20:35.389 "method": "sock_impl_set_options", 00:20:35.389 "params": { 00:20:35.389 "impl_name": "posix", 00:20:35.389 "recv_buf_size": 2097152, 00:20:35.389 "send_buf_size": 2097152, 00:20:35.389 "enable_recv_pipe": true, 00:20:35.389 "enable_quickack": false, 00:20:35.389 "enable_placement_id": 0, 00:20:35.389 "enable_zerocopy_send_server": true, 00:20:35.389 "enable_zerocopy_send_client": false, 00:20:35.389 "zerocopy_threshold": 0, 00:20:35.389 "tls_version": 0, 00:20:35.389 "enable_ktls": false 00:20:35.389 } 00:20:35.389 } 00:20:35.389 ] 00:20:35.389 }, 00:20:35.389 { 00:20:35.389 "subsystem": "vmd", 00:20:35.389 "config": [] 00:20:35.389 }, 00:20:35.389 { 00:20:35.389 "subsystem": "accel", 00:20:35.389 "config": [ 00:20:35.389 { 00:20:35.389 "method": "accel_set_options", 00:20:35.389 "params": { 00:20:35.389 "small_cache_size": 128, 00:20:35.389 "large_cache_size": 16, 00:20:35.389 "task_count": 2048, 00:20:35.389 "sequence_count": 2048, 00:20:35.389 "buf_count": 2048 00:20:35.389 } 00:20:35.389 } 00:20:35.389 ] 00:20:35.389 }, 00:20:35.389 { 00:20:35.389 "subsystem": "bdev", 00:20:35.389 "config": [ 00:20:35.389 { 00:20:35.389 "method": "bdev_set_options", 00:20:35.389 "params": { 00:20:35.389 "bdev_io_pool_size": 65535, 00:20:35.389 "bdev_io_cache_size": 256, 00:20:35.389 "bdev_auto_examine": true, 00:20:35.389 "iobuf_small_cache_size": 128, 00:20:35.389 "iobuf_large_cache_size": 16 00:20:35.389 } 00:20:35.389 }, 00:20:35.389 { 00:20:35.389 "method": "bdev_raid_set_options", 00:20:35.389 "params": { 00:20:35.389 "process_window_size_kb": 1024, 00:20:35.389 "process_max_bandwidth_mb_sec": 0 00:20:35.389 } 00:20:35.389 }, 00:20:35.389 { 00:20:35.389 "method": "bdev_iscsi_set_options", 
00:20:35.389 "params": { 00:20:35.389 "timeout_sec": 30 00:20:35.389 } 00:20:35.389 }, 00:20:35.389 { 00:20:35.389 "method": "bdev_nvme_set_options", 00:20:35.389 "params": { 00:20:35.389 "action_on_timeout": "none", 00:20:35.389 "timeout_us": 0, 00:20:35.389 "timeout_admin_us": 0, 00:20:35.389 "keep_alive_timeout_ms": 10000, 00:20:35.389 "arbitration_burst": 0, 00:20:35.389 "low_priority_weight": 0, 00:20:35.389 "medium_priority_weight": 0, 00:20:35.389 "high_priority_weight": 0, 00:20:35.389 "nvme_adminq_poll_period_us": 10000, 00:20:35.389 "nvme_ioq_poll_period_us": 0, 00:20:35.389 "io_queue_requests": 0, 00:20:35.389 "delay_cmd_submit": true, 00:20:35.389 "transport_retry_count": 4, 00:20:35.389 "bdev_retry_count": 3, 00:20:35.389 "transport_ack_timeout": 0, 00:20:35.389 "ctrlr_loss_timeout_sec": 0, 00:20:35.389 "reconnect_delay_sec": 0, 00:20:35.389 "fast_io_fail_timeout_sec": 0, 00:20:35.389 "disable_auto_failback": false, 00:20:35.389 "generate_uuids": false, 00:20:35.389 "transport_tos": 0, 00:20:35.389 "nvme_error_stat": false, 00:20:35.389 "rdma_srq_size": 0, 00:20:35.390 "io_path_stat": false, 00:20:35.390 "allow_accel_sequence": false, 00:20:35.390 "rdma_max_cq_size": 0, 00:20:35.390 "rdma_cm_event_timeout_ms": 0, 00:20:35.390 "dhchap_digests": [ 00:20:35.390 "sha256", 00:20:35.390 "sha384", 00:20:35.390 "sha512" 00:20:35.390 ], 00:20:35.390 "dhchap_dhgroups": [ 00:20:35.390 "null", 00:20:35.390 "ffdhe2048", 00:20:35.390 "ffdhe3072", 00:20:35.390 "ffdhe4096", 00:20:35.390 "ffdhe6144", 00:20:35.390 "ffdhe8192" 00:20:35.390 ] 00:20:35.390 } 00:20:35.390 }, 00:20:35.390 { 00:20:35.390 "method": "bdev_nvme_set_hotplug", 00:20:35.390 "params": { 00:20:35.390 "period_us": 100000, 00:20:35.390 "enable": false 00:20:35.390 } 00:20:35.390 }, 00:20:35.390 { 00:20:35.390 "method": "bdev_wait_for_examine" 00:20:35.390 } 00:20:35.390 ] 00:20:35.390 }, 00:20:35.390 { 00:20:35.390 "subsystem": "scsi", 00:20:35.390 "config": null 00:20:35.390 }, 00:20:35.390 { 
00:20:35.390 "subsystem": "scheduler", 00:20:35.390 "config": [ 00:20:35.390 { 00:20:35.390 "method": "framework_set_scheduler", 00:20:35.390 "params": { 00:20:35.390 "name": "static" 00:20:35.390 } 00:20:35.390 } 00:20:35.390 ] 00:20:35.390 }, 00:20:35.390 { 00:20:35.390 "subsystem": "vhost_scsi", 00:20:35.390 "config": [] 00:20:35.390 }, 00:20:35.390 { 00:20:35.390 "subsystem": "vhost_blk", 00:20:35.390 "config": [] 00:20:35.390 }, 00:20:35.390 { 00:20:35.390 "subsystem": "ublk", 00:20:35.390 "config": [] 00:20:35.390 }, 00:20:35.390 { 00:20:35.390 "subsystem": "nbd", 00:20:35.390 "config": [] 00:20:35.390 }, 00:20:35.390 { 00:20:35.390 "subsystem": "nvmf", 00:20:35.390 "config": [ 00:20:35.390 { 00:20:35.390 "method": "nvmf_set_config", 00:20:35.390 "params": { 00:20:35.390 "discovery_filter": "match_any", 00:20:35.390 "admin_cmd_passthru": { 00:20:35.390 "identify_ctrlr": false 00:20:35.390 }, 00:20:35.390 "dhchap_digests": [ 00:20:35.390 "sha256", 00:20:35.390 "sha384", 00:20:35.390 "sha512" 00:20:35.390 ], 00:20:35.390 "dhchap_dhgroups": [ 00:20:35.390 "null", 00:20:35.390 "ffdhe2048", 00:20:35.390 "ffdhe3072", 00:20:35.390 "ffdhe4096", 00:20:35.390 "ffdhe6144", 00:20:35.390 "ffdhe8192" 00:20:35.390 ] 00:20:35.390 } 00:20:35.390 }, 00:20:35.390 { 00:20:35.390 "method": "nvmf_set_max_subsystems", 00:20:35.390 "params": { 00:20:35.390 "max_subsystems": 1024 00:20:35.390 } 00:20:35.390 }, 00:20:35.390 { 00:20:35.390 "method": "nvmf_set_crdt", 00:20:35.390 "params": { 00:20:35.390 "crdt1": 0, 00:20:35.390 "crdt2": 0, 00:20:35.390 "crdt3": 0 00:20:35.390 } 00:20:35.390 }, 00:20:35.390 { 00:20:35.390 "method": "nvmf_create_transport", 00:20:35.390 "params": { 00:20:35.390 "trtype": "TCP", 00:20:35.390 "max_queue_depth": 128, 00:20:35.390 "max_io_qpairs_per_ctrlr": 127, 00:20:35.390 "in_capsule_data_size": 4096, 00:20:35.390 "max_io_size": 131072, 00:20:35.390 "io_unit_size": 131072, 00:20:35.390 "max_aq_depth": 128, 00:20:35.390 "num_shared_buffers": 511, 
00:20:35.390 "buf_cache_size": 4294967295, 00:20:35.390 "dif_insert_or_strip": false, 00:20:35.390 "zcopy": false, 00:20:35.390 "c2h_success": true, 00:20:35.390 "sock_priority": 0, 00:20:35.390 "abort_timeout_sec": 1, 00:20:35.390 "ack_timeout": 0, 00:20:35.390 "data_wr_pool_size": 0 00:20:35.390 } 00:20:35.390 } 00:20:35.390 ] 00:20:35.390 }, 00:20:35.390 { 00:20:35.390 "subsystem": "iscsi", 00:20:35.390 "config": [ 00:20:35.390 { 00:20:35.390 "method": "iscsi_set_options", 00:20:35.390 "params": { 00:20:35.390 "node_base": "iqn.2016-06.io.spdk", 00:20:35.390 "max_sessions": 128, 00:20:35.390 "max_connections_per_session": 2, 00:20:35.390 "max_queue_depth": 64, 00:20:35.390 "default_time2wait": 2, 00:20:35.390 "default_time2retain": 20, 00:20:35.390 "first_burst_length": 8192, 00:20:35.390 "immediate_data": true, 00:20:35.390 "allow_duplicated_isid": false, 00:20:35.390 "error_recovery_level": 0, 00:20:35.390 "nop_timeout": 60, 00:20:35.390 "nop_in_interval": 30, 00:20:35.390 "disable_chap": false, 00:20:35.390 "require_chap": false, 00:20:35.390 "mutual_chap": false, 00:20:35.390 "chap_group": 0, 00:20:35.390 "max_large_datain_per_connection": 64, 00:20:35.390 "max_r2t_per_connection": 4, 00:20:35.390 "pdu_pool_size": 36864, 00:20:35.390 "immediate_data_pool_size": 16384, 00:20:35.390 "data_out_pool_size": 2048 00:20:35.390 } 00:20:35.390 } 00:20:35.390 ] 00:20:35.390 } 00:20:35.390 ] 00:20:35.390 } 00:20:35.390 17:17:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:20:35.390 17:17:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57184 00:20:35.390 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57184 ']' 00:20:35.390 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57184 00:20:35.390 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:20:35.390 17:17:05 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.390 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57184 00:20:35.390 killing process with pid 57184 00:20:35.390 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:35.390 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:35.390 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57184' 00:20:35.390 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57184 00:20:35.390 17:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57184 00:20:38.684 17:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57240 00:20:38.684 17:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:20:38.684 17:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:20:43.995 17:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57240 00:20:43.995 17:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57240 ']' 00:20:43.995 17:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57240 00:20:43.995 17:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:20:43.995 17:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.995 17:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57240 00:20:43.995 killing process with pid 57240 00:20:43.995 17:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.995 17:17:13 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.995 17:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57240' 00:20:43.995 17:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57240 00:20:43.995 17:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57240 00:20:45.899 17:17:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:20:45.899 17:17:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:20:45.899 00:20:45.899 real 0m12.326s 00:20:45.899 user 0m11.491s 00:20:45.899 sys 0m1.265s 00:20:45.899 ************************************ 00:20:45.899 END TEST skip_rpc_with_json 00:20:45.899 ************************************ 00:20:45.899 17:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:45.899 17:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:20:45.899 17:17:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:20:45.899 17:17:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:45.899 17:17:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:45.899 17:17:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:45.899 ************************************ 00:20:45.899 START TEST skip_rpc_with_delay 00:20:45.899 ************************************ 00:20:45.899 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:20:45.899 17:17:16 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:20:45.899 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:20:45.899 
17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:20:45.899 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:45.899 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.899 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:46.158 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.158 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:46.158 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.158 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:46.158 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:20:46.158 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:20:46.158 [2024-11-26 17:17:16.140557] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:20:46.158 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:20:46.158 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:46.158 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:46.158 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:46.158 00:20:46.158 real 0m0.206s 00:20:46.158 user 0m0.091s 00:20:46.158 sys 0m0.111s 00:20:46.158 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.158 17:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:20:46.158 ************************************ 00:20:46.158 END TEST skip_rpc_with_delay 00:20:46.158 ************************************ 00:20:46.418 17:17:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:20:46.418 17:17:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:20:46.418 17:17:16 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:20:46.418 17:17:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:46.418 17:17:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.418 17:17:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:46.418 ************************************ 00:20:46.418 START TEST exit_on_failed_rpc_init 00:20:46.418 ************************************ 00:20:46.418 17:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:20:46.418 17:17:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57379 00:20:46.418 17:17:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:46.418 17:17:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57379 00:20:46.418 17:17:16 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57379 ']' 00:20:46.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.418 17:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.418 17:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.418 17:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.418 17:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.418 17:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:20:46.418 [2024-11-26 17:17:16.435655] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:20:46.418 [2024-11-26 17:17:16.435838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57379 ] 00:20:46.677 [2024-11-26 17:17:16.638913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.969 [2024-11-26 17:17:16.789937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.906 17:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.906 17:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:20:47.906 17:17:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:20:47.906 17:17:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:20:47.906 17:17:17 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:20:47.906 17:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:20:47.906 17:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.906 17:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.906 17:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.906 17:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.906 17:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.906 17:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.906 17:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.906 17:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:20:47.906 17:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:20:48.165 [2024-11-26 17:17:18.044525] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:20:48.165 [2024-11-26 17:17:18.044943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57397 ] 00:20:48.165 [2024-11-26 17:17:18.232457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.423 [2024-11-26 17:17:18.381686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.423 [2024-11-26 17:17:18.381830] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:48.423 [2024-11-26 17:17:18.381850] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:48.423 [2024-11-26 17:17:18.381870] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57379 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57379 ']' 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57379 00:20:48.682 17:17:18 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57379 00:20:48.682 killing process with pid 57379 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57379' 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57379 00:20:48.682 17:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57379 00:20:52.039 ************************************ 00:20:52.039 END TEST exit_on_failed_rpc_init 00:20:52.039 ************************************ 00:20:52.039 00:20:52.039 real 0m5.189s 00:20:52.039 user 0m5.411s 00:20:52.039 sys 0m0.847s 00:20:52.039 17:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.039 17:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:20:52.039 17:17:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:20:52.039 00:20:52.039 real 0m26.092s 00:20:52.039 user 0m24.397s 00:20:52.039 sys 0m3.099s 00:20:52.039 17:17:21 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.039 ************************************ 00:20:52.039 17:17:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:52.039 END TEST skip_rpc 00:20:52.039 ************************************ 00:20:52.039 17:17:21 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:20:52.039 17:17:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:52.039 17:17:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.039 17:17:21 -- common/autotest_common.sh@10 -- # set +x 00:20:52.039 ************************************ 00:20:52.039 START TEST rpc_client 00:20:52.039 ************************************ 00:20:52.039 17:17:21 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:20:52.039 * Looking for test storage... 00:20:52.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:20:52.039 17:17:21 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:52.039 17:17:21 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:20:52.039 17:17:21 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:52.039 17:17:21 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@345 
-- # : 1 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@353 -- # local d=1 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@355 -- # echo 1 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@353 -- # local d=2 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@355 -- # echo 2 00:20:52.039 17:17:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:20:52.040 17:17:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:52.040 17:17:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:52.040 17:17:21 rpc_client -- scripts/common.sh@368 -- # return 0 00:20:52.040 17:17:21 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:52.040 17:17:21 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:52.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.040 --rc genhtml_branch_coverage=1 00:20:52.040 --rc genhtml_function_coverage=1 00:20:52.040 --rc genhtml_legend=1 00:20:52.040 --rc geninfo_all_blocks=1 00:20:52.040 --rc geninfo_unexecuted_blocks=1 00:20:52.040 00:20:52.040 ' 00:20:52.040 17:17:21 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:52.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.040 --rc genhtml_branch_coverage=1 00:20:52.040 --rc genhtml_function_coverage=1 00:20:52.040 --rc 
genhtml_legend=1 00:20:52.040 --rc geninfo_all_blocks=1 00:20:52.040 --rc geninfo_unexecuted_blocks=1 00:20:52.040 00:20:52.040 ' 00:20:52.040 17:17:21 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:52.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.040 --rc genhtml_branch_coverage=1 00:20:52.040 --rc genhtml_function_coverage=1 00:20:52.040 --rc genhtml_legend=1 00:20:52.040 --rc geninfo_all_blocks=1 00:20:52.040 --rc geninfo_unexecuted_blocks=1 00:20:52.040 00:20:52.040 ' 00:20:52.040 17:17:21 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:52.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.040 --rc genhtml_branch_coverage=1 00:20:52.040 --rc genhtml_function_coverage=1 00:20:52.040 --rc genhtml_legend=1 00:20:52.040 --rc geninfo_all_blocks=1 00:20:52.040 --rc geninfo_unexecuted_blocks=1 00:20:52.040 00:20:52.040 ' 00:20:52.040 17:17:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:20:52.040 OK 00:20:52.040 17:17:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:20:52.040 00:20:52.040 real 0m0.339s 00:20:52.040 user 0m0.183s 00:20:52.040 sys 0m0.176s 00:20:52.040 17:17:21 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.040 17:17:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:20:52.040 ************************************ 00:20:52.040 END TEST rpc_client 00:20:52.040 ************************************ 00:20:52.040 17:17:22 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:20:52.040 17:17:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:52.040 17:17:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.040 17:17:22 -- common/autotest_common.sh@10 -- # set +x 00:20:52.040 ************************************ 00:20:52.040 START TEST json_config 
00:20:52.040 ************************************ 00:20:52.040 17:17:22 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:20:52.040 17:17:22 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:52.040 17:17:22 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:20:52.040 17:17:22 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:52.300 17:17:22 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:52.300 17:17:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:52.300 17:17:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:52.300 17:17:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:52.300 17:17:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:20:52.300 17:17:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:20:52.300 17:17:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:20:52.300 17:17:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:20:52.300 17:17:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:20:52.300 17:17:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:20:52.300 17:17:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:20:52.300 17:17:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:52.300 17:17:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:20:52.300 17:17:22 json_config -- scripts/common.sh@345 -- # : 1 00:20:52.300 17:17:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:52.300 17:17:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:52.300 17:17:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:20:52.300 17:17:22 json_config -- scripts/common.sh@353 -- # local d=1 00:20:52.300 17:17:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:52.300 17:17:22 json_config -- scripts/common.sh@355 -- # echo 1 00:20:52.300 17:17:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:20:52.300 17:17:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:20:52.300 17:17:22 json_config -- scripts/common.sh@353 -- # local d=2 00:20:52.300 17:17:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:52.300 17:17:22 json_config -- scripts/common.sh@355 -- # echo 2 00:20:52.300 17:17:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:20:52.300 17:17:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:52.300 17:17:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:52.300 17:17:22 json_config -- scripts/common.sh@368 -- # return 0 00:20:52.300 17:17:22 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:52.300 17:17:22 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:52.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.300 --rc genhtml_branch_coverage=1 00:20:52.300 --rc genhtml_function_coverage=1 00:20:52.300 --rc genhtml_legend=1 00:20:52.300 --rc geninfo_all_blocks=1 00:20:52.300 --rc geninfo_unexecuted_blocks=1 00:20:52.300 00:20:52.300 ' 00:20:52.300 17:17:22 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:52.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.300 --rc genhtml_branch_coverage=1 00:20:52.300 --rc genhtml_function_coverage=1 00:20:52.300 --rc genhtml_legend=1 00:20:52.300 --rc geninfo_all_blocks=1 00:20:52.300 --rc geninfo_unexecuted_blocks=1 00:20:52.300 00:20:52.300 ' 00:20:52.300 17:17:22 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:52.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.300 --rc genhtml_branch_coverage=1 00:20:52.300 --rc genhtml_function_coverage=1 00:20:52.300 --rc genhtml_legend=1 00:20:52.300 --rc geninfo_all_blocks=1 00:20:52.300 --rc geninfo_unexecuted_blocks=1 00:20:52.300 00:20:52.300 ' 00:20:52.300 17:17:22 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:52.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.300 --rc genhtml_branch_coverage=1 00:20:52.300 --rc genhtml_function_coverage=1 00:20:52.300 --rc genhtml_legend=1 00:20:52.300 --rc geninfo_all_blocks=1 00:20:52.300 --rc geninfo_unexecuted_blocks=1 00:20:52.300 00:20:52.300 ' 00:20:52.300 17:17:22 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:52.300 17:17:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:20:52.300 17:17:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.300 17:17:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.300 17:17:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.300 17:17:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.300 17:17:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.300 17:17:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.300 17:17:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.300 17:17:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.300 17:17:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.300 17:17:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.300 17:17:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:efb88bfc-94fa-46e9-a548-d81a914b4dd7 00:20:52.300 17:17:22 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=efb88bfc-94fa-46e9-a548-d81a914b4dd7 00:20:52.300 17:17:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.301 17:17:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.301 17:17:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:52.301 17:17:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:52.301 17:17:22 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:52.301 17:17:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:20:52.301 17:17:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.301 17:17:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.301 17:17:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.301 17:17:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.301 17:17:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.301 17:17:22 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.301 17:17:22 json_config -- paths/export.sh@5 -- # export PATH 00:20:52.301 17:17:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.301 17:17:22 json_config -- nvmf/common.sh@51 -- # : 0 00:20:52.301 17:17:22 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:52.301 17:17:22 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:52.301 17:17:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:52.301 17:17:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.301 17:17:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.301 17:17:22 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:52.301 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:52.301 17:17:22 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:52.301 17:17:22 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:52.301 17:17:22 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:52.301 17:17:22 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:20:52.301 17:17:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:20:52.301 17:17:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:20:52.301 17:17:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:20:52.301 17:17:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:20:52.301 17:17:22 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:20:52.301 WARNING: No tests are enabled so not running JSON configuration tests 00:20:52.301 17:17:22 json_config -- json_config/json_config.sh@28 -- # exit 0 00:20:52.301 00:20:52.301 real 0m0.243s 00:20:52.301 user 0m0.130s 00:20:52.301 sys 0m0.110s 00:20:52.301 17:17:22 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.301 17:17:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:52.301 ************************************ 00:20:52.301 END TEST json_config 00:20:52.301 ************************************ 00:20:52.301 17:17:22 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:20:52.301 17:17:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:52.301 17:17:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.301 17:17:22 -- common/autotest_common.sh@10 -- # set +x 00:20:52.301 ************************************ 00:20:52.301 START TEST json_config_extra_key 00:20:52.301 ************************************ 00:20:52.301 17:17:22 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:20:52.561 17:17:22 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:52.561 17:17:22 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:20:52.561 17:17:22 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:52.561 17:17:22 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:20:52.561 17:17:22 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:52.561 17:17:22 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.561 --rc genhtml_branch_coverage=1 00:20:52.561 --rc genhtml_function_coverage=1 00:20:52.561 --rc genhtml_legend=1 00:20:52.561 --rc geninfo_all_blocks=1 00:20:52.561 --rc geninfo_unexecuted_blocks=1 00:20:52.561 00:20:52.561 ' 00:20:52.561 17:17:22 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.561 --rc genhtml_branch_coverage=1 00:20:52.561 --rc genhtml_function_coverage=1 00:20:52.561 --rc 
genhtml_legend=1 00:20:52.561 --rc geninfo_all_blocks=1 00:20:52.561 --rc geninfo_unexecuted_blocks=1 00:20:52.561 00:20:52.561 ' 00:20:52.561 17:17:22 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.561 --rc genhtml_branch_coverage=1 00:20:52.561 --rc genhtml_function_coverage=1 00:20:52.561 --rc genhtml_legend=1 00:20:52.561 --rc geninfo_all_blocks=1 00:20:52.561 --rc geninfo_unexecuted_blocks=1 00:20:52.561 00:20:52.561 ' 00:20:52.561 17:17:22 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.561 --rc genhtml_branch_coverage=1 00:20:52.561 --rc genhtml_function_coverage=1 00:20:52.561 --rc genhtml_legend=1 00:20:52.561 --rc geninfo_all_blocks=1 00:20:52.561 --rc geninfo_unexecuted_blocks=1 00:20:52.561 00:20:52.561 ' 00:20:52.561 17:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:efb88bfc-94fa-46e9-a548-d81a914b4dd7 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=efb88bfc-94fa-46e9-a548-d81a914b4dd7 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.561 17:17:22 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.561 17:17:22 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.561 17:17:22 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.561 17:17:22 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.561 17:17:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:20:52.561 17:17:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:52.561 17:17:22 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:52.562 17:17:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.562 17:17:22 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:20:52.562 17:17:22 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:52.562 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:52.562 17:17:22 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:52.562 17:17:22 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:52.562 17:17:22 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:52.562 17:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:20:52.562 17:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:20:52.562 17:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:20:52.562 17:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:20:52.562 17:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:20:52.562 17:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:20:52.562 17:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:20:52.562 17:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:20:52.562 17:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:20:52.562 17:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:20:52.562 17:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:20:52.562 INFO: launching applications... 
00:20:52.562 17:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:20:52.562 17:17:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:20:52.562 17:17:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:20:52.562 17:17:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:20:52.562 17:17:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:20:52.562 17:17:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:20:52.562 17:17:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:20:52.562 17:17:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:20:52.562 17:17:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57618 00:20:52.562 17:17:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:20:52.562 Waiting for target to run... 00:20:52.562 17:17:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57618 /var/tmp/spdk_tgt.sock 00:20:52.562 17:17:22 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57618 ']' 00:20:52.562 17:17:22 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:20:52.562 17:17:22 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:20:52.562 17:17:22 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.562 17:17:22 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:20:52.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:20:52.562 17:17:22 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.562 17:17:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:20:52.821 [2024-11-26 17:17:22.731522] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:20:52.821 [2024-11-26 17:17:22.731966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57618 ] 00:20:53.390 [2024-11-26 17:17:23.333003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.390 [2024-11-26 17:17:23.468564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.325 00:20:54.325 INFO: shutting down applications... 00:20:54.325 17:17:24 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.325 17:17:24 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:20:54.325 17:17:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:20:54.325 17:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:20:54.325 17:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:20:54.325 17:17:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:20:54.325 17:17:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:20:54.325 17:17:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57618 ]] 00:20:54.326 17:17:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57618 00:20:54.326 17:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:20:54.326 17:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:20:54.326 17:17:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57618 00:20:54.326 17:17:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:20:54.893 17:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:20:54.893 17:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:20:54.893 17:17:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57618 00:20:54.893 17:17:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:20:55.460 17:17:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:20:55.460 17:17:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:20:55.460 17:17:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57618 00:20:55.460 17:17:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:20:55.719 17:17:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:20:55.719 17:17:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:20:55.719 17:17:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57618 00:20:55.719 17:17:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:20:56.286 17:17:26 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:20:56.286 17:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:20:56.286 17:17:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57618 00:20:56.286 17:17:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:20:56.855 17:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:20:56.855 17:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:20:56.855 17:17:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57618 00:20:56.855 17:17:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:20:57.464 17:17:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:20:57.464 17:17:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:20:57.464 17:17:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57618 00:20:57.464 17:17:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:20:57.733 17:17:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:20:57.733 17:17:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:20:57.733 17:17:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57618 00:20:57.733 17:17:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:20:57.733 17:17:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:20:57.733 17:17:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:20:57.733 SPDK target shutdown done 00:20:57.733 17:17:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:20:57.734 17:17:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:20:57.734 Success 00:20:57.734 00:20:57.734 real 0m5.494s 00:20:57.734 user 0m4.710s 00:20:57.734 sys 0m0.867s 00:20:57.992 17:17:27 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:20:57.992 ************************************ 00:20:57.992 END TEST json_config_extra_key 00:20:57.992 ************************************ 00:20:57.992 17:17:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:20:57.992 17:17:27 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:20:57.992 17:17:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:57.992 17:17:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.992 17:17:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.992 ************************************ 00:20:57.992 START TEST alias_rpc 00:20:57.992 ************************************ 00:20:57.993 17:17:27 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:20:57.993 * Looking for test storage... 00:20:57.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:20:57.993 17:17:28 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:57.993 17:17:28 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:20:57.993 17:17:28 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:58.251 17:17:28 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:20:58.251 17:17:28 alias_rpc -- 
scripts/common.sh@340 -- # ver1_l=2 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@345 -- # : 1 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.251 17:17:28 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:20:58.252 17:17:28 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:58.252 17:17:28 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:58.252 17:17:28 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:58.252 17:17:28 alias_rpc -- scripts/common.sh@368 -- # return 0 00:20:58.252 17:17:28 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.252 17:17:28 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:58.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.252 --rc genhtml_branch_coverage=1 00:20:58.252 --rc genhtml_function_coverage=1 00:20:58.252 --rc genhtml_legend=1 00:20:58.252 --rc geninfo_all_blocks=1 00:20:58.252 --rc 
geninfo_unexecuted_blocks=1 00:20:58.252 00:20:58.252 ' 00:20:58.252 17:17:28 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:58.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.252 --rc genhtml_branch_coverage=1 00:20:58.252 --rc genhtml_function_coverage=1 00:20:58.252 --rc genhtml_legend=1 00:20:58.252 --rc geninfo_all_blocks=1 00:20:58.252 --rc geninfo_unexecuted_blocks=1 00:20:58.252 00:20:58.252 ' 00:20:58.252 17:17:28 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:58.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.252 --rc genhtml_branch_coverage=1 00:20:58.252 --rc genhtml_function_coverage=1 00:20:58.252 --rc genhtml_legend=1 00:20:58.252 --rc geninfo_all_blocks=1 00:20:58.252 --rc geninfo_unexecuted_blocks=1 00:20:58.252 00:20:58.252 ' 00:20:58.252 17:17:28 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:58.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.252 --rc genhtml_branch_coverage=1 00:20:58.252 --rc genhtml_function_coverage=1 00:20:58.252 --rc genhtml_legend=1 00:20:58.252 --rc geninfo_all_blocks=1 00:20:58.252 --rc geninfo_unexecuted_blocks=1 00:20:58.252 00:20:58.252 ' 00:20:58.252 17:17:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:20:58.252 17:17:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57736 00:20:58.252 17:17:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:58.252 17:17:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57736 00:20:58.252 17:17:28 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57736 ']' 00:20:58.252 17:17:28 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.252 17:17:28 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.252 17:17:28 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.252 17:17:28 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.252 17:17:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:58.252 [2024-11-26 17:17:28.284916] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:20:58.252 [2024-11-26 17:17:28.285300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57736 ] 00:20:58.511 [2024-11-26 17:17:28.473618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.771 [2024-11-26 17:17:28.625412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.723 17:17:29 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.723 17:17:29 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:59.723 17:17:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:20:59.981 17:17:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57736 00:20:59.981 17:17:30 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57736 ']' 00:20:59.981 17:17:30 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57736 00:20:59.981 17:17:30 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:20:59.981 17:17:30 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.981 17:17:30 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57736 00:20:59.981 killing process with pid 57736 00:20:59.981 17:17:30 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.981 17:17:30 alias_rpc -- common/autotest_common.sh@964 -- # 
'[' reactor_0 = sudo ']' 00:20:59.981 17:17:30 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57736' 00:20:59.981 17:17:30 alias_rpc -- common/autotest_common.sh@973 -- # kill 57736 00:20:59.981 17:17:30 alias_rpc -- common/autotest_common.sh@978 -- # wait 57736 00:21:03.314 ************************************ 00:21:03.314 END TEST alias_rpc 00:21:03.314 ************************************ 00:21:03.314 00:21:03.314 real 0m4.876s 00:21:03.314 user 0m4.788s 00:21:03.314 sys 0m0.807s 00:21:03.314 17:17:32 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:03.314 17:17:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:03.314 17:17:32 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:21:03.314 17:17:32 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:21:03.314 17:17:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:03.314 17:17:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:03.314 17:17:32 -- common/autotest_common.sh@10 -- # set +x 00:21:03.314 ************************************ 00:21:03.314 START TEST spdkcli_tcp 00:21:03.314 ************************************ 00:21:03.314 17:17:32 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:21:03.314 * Looking for test storage... 
00:21:03.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:03.314 17:17:32 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:03.314 17:17:33 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:21:03.314 17:17:33 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:03.314 17:17:33 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:03.314 17:17:33 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:03.314 17:17:33 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:03.314 17:17:33 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:03.314 17:17:33 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:21:03.314 17:17:33 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:21:03.314 17:17:33 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:21:03.314 17:17:33 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:21:03.314 17:17:33 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:21:03.314 17:17:33 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:03.315 17:17:33 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:21:03.315 17:17:33 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:03.315 17:17:33 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:03.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.315 --rc genhtml_branch_coverage=1 00:21:03.315 --rc genhtml_function_coverage=1 00:21:03.315 --rc genhtml_legend=1 00:21:03.315 --rc geninfo_all_blocks=1 00:21:03.315 --rc geninfo_unexecuted_blocks=1 00:21:03.315 00:21:03.315 ' 00:21:03.315 17:17:33 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:03.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.315 --rc genhtml_branch_coverage=1 00:21:03.315 --rc genhtml_function_coverage=1 00:21:03.315 --rc genhtml_legend=1 00:21:03.315 --rc geninfo_all_blocks=1 00:21:03.315 --rc geninfo_unexecuted_blocks=1 00:21:03.315 00:21:03.315 ' 00:21:03.315 17:17:33 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:03.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.315 --rc genhtml_branch_coverage=1 00:21:03.315 --rc genhtml_function_coverage=1 00:21:03.315 --rc genhtml_legend=1 00:21:03.315 --rc geninfo_all_blocks=1 00:21:03.315 --rc geninfo_unexecuted_blocks=1 00:21:03.315 00:21:03.315 ' 00:21:03.315 17:17:33 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:03.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.315 --rc genhtml_branch_coverage=1 00:21:03.315 --rc genhtml_function_coverage=1 00:21:03.315 --rc genhtml_legend=1 00:21:03.315 --rc geninfo_all_blocks=1 00:21:03.315 --rc geninfo_unexecuted_blocks=1 00:21:03.315 00:21:03.315 ' 00:21:03.315 17:17:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:03.315 17:17:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:03.315 17:17:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:03.315 17:17:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:21:03.315 17:17:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:21:03.315 17:17:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:03.315 17:17:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:21:03.315 17:17:33 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:03.315 17:17:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:03.315 17:17:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57854 00:21:03.315 17:17:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57854 00:21:03.315 17:17:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:21:03.315 17:17:33 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57854 ']' 00:21:03.315 17:17:33 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.315 17:17:33 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.315 17:17:33 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.315 17:17:33 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.315 17:17:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:03.315 [2024-11-26 17:17:33.229409] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:21:03.315 [2024-11-26 17:17:33.229779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57854 ] 00:21:03.315 [2024-11-26 17:17:33.414981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:03.575 [2024-11-26 17:17:33.568543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.575 [2024-11-26 17:17:33.568603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.954 17:17:34 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.955 17:17:34 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:21:04.955 17:17:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:21:04.955 17:17:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57877 00:21:04.955 17:17:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:21:04.955 [ 00:21:04.955 "bdev_malloc_delete", 
00:21:04.955 "bdev_malloc_create", 00:21:04.955 "bdev_null_resize", 00:21:04.955 "bdev_null_delete", 00:21:04.955 "bdev_null_create", 00:21:04.955 "bdev_nvme_cuse_unregister", 00:21:04.955 "bdev_nvme_cuse_register", 00:21:04.955 "bdev_opal_new_user", 00:21:04.955 "bdev_opal_set_lock_state", 00:21:04.955 "bdev_opal_delete", 00:21:04.955 "bdev_opal_get_info", 00:21:04.955 "bdev_opal_create", 00:21:04.955 "bdev_nvme_opal_revert", 00:21:04.955 "bdev_nvme_opal_init", 00:21:04.955 "bdev_nvme_send_cmd", 00:21:04.955 "bdev_nvme_set_keys", 00:21:04.955 "bdev_nvme_get_path_iostat", 00:21:04.955 "bdev_nvme_get_mdns_discovery_info", 00:21:04.955 "bdev_nvme_stop_mdns_discovery", 00:21:04.955 "bdev_nvme_start_mdns_discovery", 00:21:04.955 "bdev_nvme_set_multipath_policy", 00:21:04.955 "bdev_nvme_set_preferred_path", 00:21:04.955 "bdev_nvme_get_io_paths", 00:21:04.955 "bdev_nvme_remove_error_injection", 00:21:04.955 "bdev_nvme_add_error_injection", 00:21:04.955 "bdev_nvme_get_discovery_info", 00:21:04.955 "bdev_nvme_stop_discovery", 00:21:04.955 "bdev_nvme_start_discovery", 00:21:04.955 "bdev_nvme_get_controller_health_info", 00:21:04.955 "bdev_nvme_disable_controller", 00:21:04.955 "bdev_nvme_enable_controller", 00:21:04.955 "bdev_nvme_reset_controller", 00:21:04.955 "bdev_nvme_get_transport_statistics", 00:21:04.955 "bdev_nvme_apply_firmware", 00:21:04.955 "bdev_nvme_detach_controller", 00:21:04.955 "bdev_nvme_get_controllers", 00:21:04.955 "bdev_nvme_attach_controller", 00:21:04.955 "bdev_nvme_set_hotplug", 00:21:04.955 "bdev_nvme_set_options", 00:21:04.955 "bdev_passthru_delete", 00:21:04.955 "bdev_passthru_create", 00:21:04.955 "bdev_lvol_set_parent_bdev", 00:21:04.955 "bdev_lvol_set_parent", 00:21:04.955 "bdev_lvol_check_shallow_copy", 00:21:04.955 "bdev_lvol_start_shallow_copy", 00:21:04.955 "bdev_lvol_grow_lvstore", 00:21:04.955 "bdev_lvol_get_lvols", 00:21:04.955 "bdev_lvol_get_lvstores", 00:21:04.955 "bdev_lvol_delete", 00:21:04.955 "bdev_lvol_set_read_only", 
00:21:04.955 "bdev_lvol_resize", 00:21:04.955 "bdev_lvol_decouple_parent", 00:21:04.955 "bdev_lvol_inflate", 00:21:04.955 "bdev_lvol_rename", 00:21:04.955 "bdev_lvol_clone_bdev", 00:21:04.955 "bdev_lvol_clone", 00:21:04.955 "bdev_lvol_snapshot", 00:21:04.955 "bdev_lvol_create", 00:21:04.955 "bdev_lvol_delete_lvstore", 00:21:04.955 "bdev_lvol_rename_lvstore", 00:21:04.955 "bdev_lvol_create_lvstore", 00:21:04.955 "bdev_raid_set_options", 00:21:04.955 "bdev_raid_remove_base_bdev", 00:21:04.955 "bdev_raid_add_base_bdev", 00:21:04.955 "bdev_raid_delete", 00:21:04.955 "bdev_raid_create", 00:21:04.955 "bdev_raid_get_bdevs", 00:21:04.955 "bdev_error_inject_error", 00:21:04.955 "bdev_error_delete", 00:21:04.955 "bdev_error_create", 00:21:04.955 "bdev_split_delete", 00:21:04.955 "bdev_split_create", 00:21:04.955 "bdev_delay_delete", 00:21:04.955 "bdev_delay_create", 00:21:04.955 "bdev_delay_update_latency", 00:21:04.955 "bdev_zone_block_delete", 00:21:04.955 "bdev_zone_block_create", 00:21:04.955 "blobfs_create", 00:21:04.955 "blobfs_detect", 00:21:04.955 "blobfs_set_cache_size", 00:21:04.955 "bdev_aio_delete", 00:21:04.955 "bdev_aio_rescan", 00:21:04.955 "bdev_aio_create", 00:21:04.955 "bdev_ftl_set_property", 00:21:04.955 "bdev_ftl_get_properties", 00:21:04.955 "bdev_ftl_get_stats", 00:21:04.955 "bdev_ftl_unmap", 00:21:04.955 "bdev_ftl_unload", 00:21:04.955 "bdev_ftl_delete", 00:21:04.955 "bdev_ftl_load", 00:21:04.955 "bdev_ftl_create", 00:21:04.955 "bdev_virtio_attach_controller", 00:21:04.955 "bdev_virtio_scsi_get_devices", 00:21:04.955 "bdev_virtio_detach_controller", 00:21:04.955 "bdev_virtio_blk_set_hotplug", 00:21:04.955 "bdev_iscsi_delete", 00:21:04.955 "bdev_iscsi_create", 00:21:04.955 "bdev_iscsi_set_options", 00:21:04.955 "accel_error_inject_error", 00:21:04.955 "ioat_scan_accel_module", 00:21:04.955 "dsa_scan_accel_module", 00:21:04.955 "iaa_scan_accel_module", 00:21:04.955 "keyring_file_remove_key", 00:21:04.955 "keyring_file_add_key", 00:21:04.955 
"keyring_linux_set_options", 00:21:04.955 "fsdev_aio_delete", 00:21:04.955 "fsdev_aio_create", 00:21:04.955 "iscsi_get_histogram", 00:21:04.955 "iscsi_enable_histogram", 00:21:04.955 "iscsi_set_options", 00:21:04.955 "iscsi_get_auth_groups", 00:21:04.955 "iscsi_auth_group_remove_secret", 00:21:04.955 "iscsi_auth_group_add_secret", 00:21:04.955 "iscsi_delete_auth_group", 00:21:04.955 "iscsi_create_auth_group", 00:21:04.955 "iscsi_set_discovery_auth", 00:21:04.955 "iscsi_get_options", 00:21:04.955 "iscsi_target_node_request_logout", 00:21:04.955 "iscsi_target_node_set_redirect", 00:21:04.955 "iscsi_target_node_set_auth", 00:21:04.955 "iscsi_target_node_add_lun", 00:21:04.955 "iscsi_get_stats", 00:21:04.955 "iscsi_get_connections", 00:21:04.955 "iscsi_portal_group_set_auth", 00:21:04.955 "iscsi_start_portal_group", 00:21:04.955 "iscsi_delete_portal_group", 00:21:04.955 "iscsi_create_portal_group", 00:21:04.955 "iscsi_get_portal_groups", 00:21:04.955 "iscsi_delete_target_node", 00:21:04.955 "iscsi_target_node_remove_pg_ig_maps", 00:21:04.955 "iscsi_target_node_add_pg_ig_maps", 00:21:04.955 "iscsi_create_target_node", 00:21:04.955 "iscsi_get_target_nodes", 00:21:04.955 "iscsi_delete_initiator_group", 00:21:04.955 "iscsi_initiator_group_remove_initiators", 00:21:04.955 "iscsi_initiator_group_add_initiators", 00:21:04.955 "iscsi_create_initiator_group", 00:21:04.955 "iscsi_get_initiator_groups", 00:21:04.955 "nvmf_set_crdt", 00:21:04.955 "nvmf_set_config", 00:21:04.955 "nvmf_set_max_subsystems", 00:21:04.955 "nvmf_stop_mdns_prr", 00:21:04.955 "nvmf_publish_mdns_prr", 00:21:04.955 "nvmf_subsystem_get_listeners", 00:21:04.955 "nvmf_subsystem_get_qpairs", 00:21:04.955 "nvmf_subsystem_get_controllers", 00:21:04.955 "nvmf_get_stats", 00:21:04.955 "nvmf_get_transports", 00:21:04.955 "nvmf_create_transport", 00:21:04.955 "nvmf_get_targets", 00:21:04.955 "nvmf_delete_target", 00:21:04.955 "nvmf_create_target", 00:21:04.955 "nvmf_subsystem_allow_any_host", 00:21:04.955 
"nvmf_subsystem_set_keys", 00:21:04.955 "nvmf_subsystem_remove_host", 00:21:04.955 "nvmf_subsystem_add_host", 00:21:04.955 "nvmf_ns_remove_host", 00:21:04.955 "nvmf_ns_add_host", 00:21:04.955 "nvmf_subsystem_remove_ns", 00:21:04.955 "nvmf_subsystem_set_ns_ana_group", 00:21:04.955 "nvmf_subsystem_add_ns", 00:21:04.955 "nvmf_subsystem_listener_set_ana_state", 00:21:04.955 "nvmf_discovery_get_referrals", 00:21:04.955 "nvmf_discovery_remove_referral", 00:21:04.955 "nvmf_discovery_add_referral", 00:21:04.955 "nvmf_subsystem_remove_listener", 00:21:04.955 "nvmf_subsystem_add_listener", 00:21:04.955 "nvmf_delete_subsystem", 00:21:04.955 "nvmf_create_subsystem", 00:21:04.955 "nvmf_get_subsystems", 00:21:04.955 "env_dpdk_get_mem_stats", 00:21:04.955 "nbd_get_disks", 00:21:04.955 "nbd_stop_disk", 00:21:04.955 "nbd_start_disk", 00:21:04.955 "ublk_recover_disk", 00:21:04.955 "ublk_get_disks", 00:21:04.955 "ublk_stop_disk", 00:21:04.955 "ublk_start_disk", 00:21:04.955 "ublk_destroy_target", 00:21:04.955 "ublk_create_target", 00:21:04.955 "virtio_blk_create_transport", 00:21:04.955 "virtio_blk_get_transports", 00:21:04.955 "vhost_controller_set_coalescing", 00:21:04.955 "vhost_get_controllers", 00:21:04.955 "vhost_delete_controller", 00:21:04.955 "vhost_create_blk_controller", 00:21:04.955 "vhost_scsi_controller_remove_target", 00:21:04.955 "vhost_scsi_controller_add_target", 00:21:04.955 "vhost_start_scsi_controller", 00:21:04.955 "vhost_create_scsi_controller", 00:21:04.955 "thread_set_cpumask", 00:21:04.955 "scheduler_set_options", 00:21:04.955 "framework_get_governor", 00:21:04.955 "framework_get_scheduler", 00:21:04.955 "framework_set_scheduler", 00:21:04.956 "framework_get_reactors", 00:21:04.956 "thread_get_io_channels", 00:21:04.956 "thread_get_pollers", 00:21:04.956 "thread_get_stats", 00:21:04.956 "framework_monitor_context_switch", 00:21:04.956 "spdk_kill_instance", 00:21:04.956 "log_enable_timestamps", 00:21:04.956 "log_get_flags", 00:21:04.956 "log_clear_flag", 
00:21:04.956 "log_set_flag", 00:21:04.956 "log_get_level", 00:21:04.956 "log_set_level", 00:21:04.956 "log_get_print_level", 00:21:04.956 "log_set_print_level", 00:21:04.956 "framework_enable_cpumask_locks", 00:21:04.956 "framework_disable_cpumask_locks", 00:21:04.956 "framework_wait_init", 00:21:04.956 "framework_start_init", 00:21:04.956 "scsi_get_devices", 00:21:04.956 "bdev_get_histogram", 00:21:04.956 "bdev_enable_histogram", 00:21:04.956 "bdev_set_qos_limit", 00:21:04.956 "bdev_set_qd_sampling_period", 00:21:04.956 "bdev_get_bdevs", 00:21:04.956 "bdev_reset_iostat", 00:21:04.956 "bdev_get_iostat", 00:21:04.956 "bdev_examine", 00:21:04.956 "bdev_wait_for_examine", 00:21:04.956 "bdev_set_options", 00:21:04.956 "accel_get_stats", 00:21:04.956 "accel_set_options", 00:21:04.956 "accel_set_driver", 00:21:04.956 "accel_crypto_key_destroy", 00:21:04.956 "accel_crypto_keys_get", 00:21:04.956 "accel_crypto_key_create", 00:21:04.956 "accel_assign_opc", 00:21:04.956 "accel_get_module_info", 00:21:04.956 "accel_get_opc_assignments", 00:21:04.956 "vmd_rescan", 00:21:04.956 "vmd_remove_device", 00:21:04.956 "vmd_enable", 00:21:04.956 "sock_get_default_impl", 00:21:04.956 "sock_set_default_impl", 00:21:04.956 "sock_impl_set_options", 00:21:04.956 "sock_impl_get_options", 00:21:04.956 "iobuf_get_stats", 00:21:04.956 "iobuf_set_options", 00:21:04.956 "keyring_get_keys", 00:21:04.956 "framework_get_pci_devices", 00:21:04.956 "framework_get_config", 00:21:04.956 "framework_get_subsystems", 00:21:04.956 "fsdev_set_opts", 00:21:04.956 "fsdev_get_opts", 00:21:04.956 "trace_get_info", 00:21:04.956 "trace_get_tpoint_group_mask", 00:21:04.956 "trace_disable_tpoint_group", 00:21:04.956 "trace_enable_tpoint_group", 00:21:04.956 "trace_clear_tpoint_mask", 00:21:04.956 "trace_set_tpoint_mask", 00:21:04.956 "notify_get_notifications", 00:21:04.956 "notify_get_types", 00:21:04.956 "spdk_get_version", 00:21:04.956 "rpc_get_methods" 00:21:04.956 ] 00:21:04.956 17:17:34 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:21:04.956 17:17:34 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:04.956 17:17:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:04.956 17:17:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:04.956 17:17:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57854 00:21:04.956 17:17:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57854 ']' 00:21:04.956 17:17:34 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57854 00:21:04.956 17:17:34 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:21:04.956 17:17:34 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.956 17:17:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57854 00:21:04.956 killing process with pid 57854 00:21:04.956 17:17:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:04.956 17:17:34 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:04.956 17:17:34 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57854' 00:21:04.956 17:17:34 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57854 00:21:04.956 17:17:34 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57854 00:21:08.300 ************************************ 00:21:08.300 END TEST spdkcli_tcp 00:21:08.300 ************************************ 00:21:08.300 00:21:08.300 real 0m4.940s 00:21:08.300 user 0m8.685s 00:21:08.300 sys 0m0.841s 00:21:08.300 17:17:37 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.300 17:17:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:08.300 17:17:37 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:21:08.300 17:17:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:08.300 17:17:37 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:08.300 17:17:37 -- common/autotest_common.sh@10 -- # set +x 00:21:08.300 ************************************ 00:21:08.300 START TEST dpdk_mem_utility 00:21:08.300 ************************************ 00:21:08.300 17:17:37 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:21:08.300 * Looking for test storage... 00:21:08.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:21:08.300 17:17:38 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:08.300 17:17:38 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:21:08.300 17:17:38 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:08.300 17:17:38 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:21:08.300 
17:17:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:21:08.300 17:17:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:21:08.301 17:17:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:08.301 17:17:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:21:08.301 17:17:38 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:21:08.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:08.301 17:17:38 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:08.301 17:17:38 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:08.301 17:17:38 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:21:08.301 17:17:38 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:08.301 17:17:38 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:08.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.301 --rc genhtml_branch_coverage=1 00:21:08.301 --rc genhtml_function_coverage=1 00:21:08.301 --rc genhtml_legend=1 00:21:08.301 --rc geninfo_all_blocks=1 00:21:08.301 --rc geninfo_unexecuted_blocks=1 00:21:08.301 00:21:08.301 ' 00:21:08.301 17:17:38 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:08.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.301 --rc genhtml_branch_coverage=1 00:21:08.301 --rc genhtml_function_coverage=1 00:21:08.301 --rc genhtml_legend=1 00:21:08.301 --rc geninfo_all_blocks=1 00:21:08.301 --rc geninfo_unexecuted_blocks=1 00:21:08.301 00:21:08.301 ' 00:21:08.301 17:17:38 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:08.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.301 --rc genhtml_branch_coverage=1 00:21:08.301 --rc genhtml_function_coverage=1 00:21:08.301 --rc genhtml_legend=1 00:21:08.301 --rc geninfo_all_blocks=1 00:21:08.301 --rc geninfo_unexecuted_blocks=1 00:21:08.301 00:21:08.301 ' 00:21:08.301 17:17:38 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:08.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.301 --rc genhtml_branch_coverage=1 00:21:08.301 --rc genhtml_function_coverage=1 00:21:08.301 --rc genhtml_legend=1 00:21:08.301 --rc geninfo_all_blocks=1 00:21:08.301 --rc geninfo_unexecuted_blocks=1 
00:21:08.301 00:21:08.301 ' 00:21:08.301 17:17:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:21:08.301 17:17:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57982 00:21:08.301 17:17:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57982 00:21:08.301 17:17:38 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57982 ']' 00:21:08.301 17:17:38 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.301 17:17:38 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.301 17:17:38 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.301 17:17:38 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.301 17:17:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:08.301 17:17:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:21:08.301 [2024-11-26 17:17:38.256747] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:21:08.301 [2024-11-26 17:17:38.257256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57982 ] 00:21:08.559 [2024-11-26 17:17:38.444204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.559 [2024-11-26 17:17:38.626993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.934 17:17:39 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.934 17:17:39 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:21:09.934 17:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:21:09.934 17:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:21:09.934 17:17:39 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.934 17:17:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:21:09.934 { 00:21:09.934 "filename": "/tmp/spdk_mem_dump.txt" 00:21:09.934 } 00:21:09.934 17:17:39 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.934 17:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:21:09.934 DPDK memory size 824.000000 MiB in 1 heap(s) 00:21:09.934 1 heaps totaling size 824.000000 MiB 00:21:09.934 size: 824.000000 MiB heap id: 0 00:21:09.934 end heaps---------- 00:21:09.934 9 mempools totaling size 603.782043 MiB 00:21:09.934 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:21:09.934 size: 158.602051 MiB name: PDU_data_out_Pool 00:21:09.934 size: 100.555481 MiB name: bdev_io_57982 00:21:09.934 size: 50.003479 MiB name: msgpool_57982 00:21:09.934 size: 36.509338 MiB name: fsdev_io_57982 00:21:09.934 size: 
21.763794 MiB name: PDU_Pool 00:21:09.934 size: 19.513306 MiB name: SCSI_TASK_Pool 00:21:09.934 size: 4.133484 MiB name: evtpool_57982 00:21:09.934 size: 0.026123 MiB name: Session_Pool 00:21:09.934 end mempools------- 00:21:09.934 6 memzones totaling size 4.142822 MiB 00:21:09.934 size: 1.000366 MiB name: RG_ring_0_57982 00:21:09.934 size: 1.000366 MiB name: RG_ring_1_57982 00:21:09.934 size: 1.000366 MiB name: RG_ring_4_57982 00:21:09.934 size: 1.000366 MiB name: RG_ring_5_57982 00:21:09.934 size: 0.125366 MiB name: RG_ring_2_57982 00:21:09.934 size: 0.015991 MiB name: RG_ring_3_57982 00:21:09.934 end memzones------- 00:21:09.934 17:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:21:09.934 heap id: 0 total size: 824.000000 MiB number of busy elements: 321 number of free elements: 18 00:21:09.934 list of free elements. size: 16.779907 MiB 00:21:09.934 element at address: 0x200006400000 with size: 1.995972 MiB 00:21:09.934 element at address: 0x20000a600000 with size: 1.995972 MiB 00:21:09.934 element at address: 0x200003e00000 with size: 1.991028 MiB 00:21:09.934 element at address: 0x200019500040 with size: 0.999939 MiB 00:21:09.934 element at address: 0x200019900040 with size: 0.999939 MiB 00:21:09.934 element at address: 0x200019a00000 with size: 0.999084 MiB 00:21:09.934 element at address: 0x200032600000 with size: 0.994324 MiB 00:21:09.934 element at address: 0x200000400000 with size: 0.992004 MiB 00:21:09.934 element at address: 0x200019200000 with size: 0.959656 MiB 00:21:09.934 element at address: 0x200019d00040 with size: 0.936401 MiB 00:21:09.934 element at address: 0x200000200000 with size: 0.716980 MiB 00:21:09.934 element at address: 0x20001b400000 with size: 0.561218 MiB 00:21:09.934 element at address: 0x200000c00000 with size: 0.489197 MiB 00:21:09.934 element at address: 0x200019600000 with size: 0.488220 MiB 00:21:09.934 element at address: 0x200019e00000 
with size: 0.485413 MiB 00:21:09.934 element at address: 0x200012c00000 with size: 0.433228 MiB 00:21:09.935 element at address: 0x200028800000 with size: 0.390442 MiB 00:21:09.935 element at address: 0x200000800000 with size: 0.350891 MiB 00:21:09.935 list of standard malloc elements. size: 199.289185 MiB 00:21:09.935 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:21:09.935 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:21:09.935 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:21:09.935 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:21:09.935 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:21:09.935 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:21:09.935 element at address: 0x200019deff40 with size: 0.062683 MiB 00:21:09.935 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:21:09.935 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:21:09.935 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:21:09.935 element at address: 0x200012bff040 with size: 0.000305 MiB 00:21:09.935 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:21:09.935 element at address: 
0x2000004fe940 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:21:09.935 
element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7e0c0 with size: 0.000244 
MiB 00:21:09.935 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200000cff000 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012bff180 
with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012bff280 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012bff380 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012bff480 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012bff580 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012bff680 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012bff780 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012bff880 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012bff980 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:21:09.935 element at 
address: 0x20001967d1c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200019affc40 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4907c0 with size: 0.000244 MiB 
00:21:09.935 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4923c0 with 
size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:21:09.935 element at address: 
0x20001b493fc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:21:09.935 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:21:09.936 element at address: 0x200028863f40 with size: 0.000244 MiB 00:21:09.936 element at address: 0x200028864040 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886af80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886b080 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886b180 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886b280 with size: 0.000244 MiB 00:21:09.936 
element at address: 0x20002886b380 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886b480 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886b580 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886b680 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886b780 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886b880 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886b980 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886be80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886c080 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886c180 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886c280 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886c380 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886c480 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886c580 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886c680 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886c780 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886c880 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886c980 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886ce80 with size: 0.000244 
MiB 00:21:09.936 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886d080 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886d180 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886d280 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886d380 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886d480 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886d580 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886d680 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886d780 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886d880 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886d980 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886da80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886db80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886de80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886df80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886e080 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886e180 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886e280 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886e380 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886e480 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886e580 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886e680 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886e780 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886e880 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886e980 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886ea80 
with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886f080 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886f180 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886f280 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886f380 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886f480 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886f580 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886f680 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886f780 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886f880 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886f980 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:21:09.936 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:21:09.936 list of memzone associated elements. 
size: 607.930908 MiB 00:21:09.936 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:21:09.936 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:21:09.936 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:21:09.936 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:21:09.936 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:21:09.936 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57982_0 00:21:09.936 element at address: 0x200000dff340 with size: 48.003113 MiB 00:21:09.936 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57982_0 00:21:09.936 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:21:09.936 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57982_0 00:21:09.936 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:21:09.936 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:21:09.936 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:21:09.936 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:21:09.936 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:21:09.936 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57982_0 00:21:09.936 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:21:09.936 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57982 00:21:09.936 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:21:09.936 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57982 00:21:09.936 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:21:09.936 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:21:09.936 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:21:09.936 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:21:09.936 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:21:09.936 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:21:09.936 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:21:09.936 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:21:09.936 element at address: 0x200000cff100 with size: 1.000549 MiB 00:21:09.936 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57982 00:21:09.936 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:21:09.936 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57982 00:21:09.936 element at address: 0x200019affd40 with size: 1.000549 MiB 00:21:09.936 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57982 00:21:09.936 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:21:09.936 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57982 00:21:09.936 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:21:09.936 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57982 00:21:09.936 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:21:09.936 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57982 00:21:09.936 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:21:09.936 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:21:09.936 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:21:09.936 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:21:09.936 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:21:09.936 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:21:09.936 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:21:09.936 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57982 00:21:09.936 element at address: 0x20000085df80 with size: 0.125549 MiB 00:21:09.936 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57982 00:21:09.936 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:21:09.936 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:21:09.936 element at address: 0x200028864140 with size: 0.023804 MiB
00:21:09.936 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:21:09.936 element at address: 0x200000859d40 with size: 0.016174 MiB
00:21:09.936 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57982
00:21:09.936 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:21:09.936 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:21:09.936 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:21:09.936 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57982
00:21:09.936 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:21:09.936 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57982
00:21:09.936 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:21:09.936 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57982
00:21:09.936 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:21:09.936 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:21:09.936 17:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:21:09.936 17:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57982
00:21:09.936 17:17:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57982 ']'
00:21:09.936 17:17:39 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57982
00:21:09.936 17:17:39 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:21:09.936 17:17:39 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:09.936 17:17:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57982
00:21:09.936 17:17:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:09.936 17:17:39 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:09.936 killing process with pid 57982
00:21:09.936 17:17:39 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57982'
00:21:09.936 17:17:39 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57982
00:21:09.936 17:17:39 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57982
00:21:12.471
00:21:12.471 real 0m4.641s
00:21:12.471 user 0m4.452s
00:21:12.471 sys 0m0.806s
00:21:12.471 17:17:42 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:12.471 ************************************
00:21:12.471 END TEST dpdk_mem_utility
00:21:12.471 ************************************
00:21:12.471 17:17:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:21:12.730 17:17:42 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:21:12.730 17:17:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:21:12.730 17:17:42 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:12.730 17:17:42 -- common/autotest_common.sh@10 -- # set +x
00:21:12.730 ************************************
00:21:12.730 START TEST event
00:21:12.730 ************************************
00:21:12.730 17:17:42 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
* Looking for test storage...
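The killprocess xtrace above shows a guarded teardown: verify the pid argument, probe the process with a signal-free `kill -0`, check the command name is not `sudo`, then kill and reap. A minimal sketch of that flow, paraphrased from the logged commands (the real autotest_common.sh helper may differ in detail):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess guard seen in the xtrace; paraphrased, not the
# actual SPDK helper.
killprocess() {
    local pid=$1
    # '[' -z 57982 ']' in the trace: refuse an empty pid argument
    [ -z "$pid" ] && return 1
    # kill -0 delivers no signal; it only checks that the process exists
    kill -0 "$pid" 2>/dev/null || return 1
    # look up the command name for the pid (reactor_0 in the log)
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    # never signal a sudo wrapper directly
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # reap the child so no zombie is left (wait only works for our children)
    wait "$pid" 2>/dev/null || true
}
```

The `kill -0` probe is the key design point: it distinguishes "process already gone" from "process alive and killable" without side effects.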
00:21:12.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:21:12.730 17:17:42 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:21:12.730 17:17:42 event -- common/autotest_common.sh@1693 -- # lcov --version
00:21:12.730 17:17:42 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:21:12.730 17:17:42 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:21:12.730 17:17:42 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:12.730 17:17:42 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:12.730 17:17:42 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:12.730 17:17:42 event -- scripts/common.sh@336 -- # IFS=.-:
00:21:12.730 17:17:42 event -- scripts/common.sh@336 -- # read -ra ver1
00:21:12.730 17:17:42 event -- scripts/common.sh@337 -- # IFS=.-:
00:21:12.730 17:17:42 event -- scripts/common.sh@337 -- # read -ra ver2
00:21:12.730 17:17:42 event -- scripts/common.sh@338 -- # local 'op=<'
00:21:12.730 17:17:42 event -- scripts/common.sh@340 -- # ver1_l=2
00:21:12.730 17:17:42 event -- scripts/common.sh@341 -- # ver2_l=1
00:21:12.730 17:17:42 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:12.730 17:17:42 event -- scripts/common.sh@344 -- # case "$op" in
00:21:12.730 17:17:42 event -- scripts/common.sh@345 -- # : 1
00:21:12.730 17:17:42 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:12.730 17:17:42 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:12.730 17:17:42 event -- scripts/common.sh@365 -- # decimal 1
00:21:12.730 17:17:42 event -- scripts/common.sh@353 -- # local d=1
00:21:12.730 17:17:42 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:12.730 17:17:42 event -- scripts/common.sh@355 -- # echo 1
00:21:12.730 17:17:42 event -- scripts/common.sh@365 -- # ver1[v]=1
00:21:12.730 17:17:42 event -- scripts/common.sh@366 -- # decimal 2
00:21:12.730 17:17:42 event -- scripts/common.sh@353 -- # local d=2
00:21:12.730 17:17:42 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:12.730 17:17:42 event -- scripts/common.sh@355 -- # echo 2
00:21:12.730 17:17:42 event -- scripts/common.sh@366 -- # ver2[v]=2
00:21:12.730 17:17:42 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:12.730 17:17:42 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:12.730 17:17:42 event -- scripts/common.sh@368 -- # return 0
00:21:12.730 17:17:42 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:12.730 17:17:42 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:21:12.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:12.731 --rc genhtml_branch_coverage=1
00:21:12.731 --rc genhtml_function_coverage=1
00:21:12.731 --rc genhtml_legend=1
00:21:12.731 --rc geninfo_all_blocks=1
00:21:12.731 --rc geninfo_unexecuted_blocks=1
00:21:12.731
00:21:12.731 '
00:21:12.731 17:17:42 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:21:12.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:12.731 --rc genhtml_branch_coverage=1
00:21:12.731 --rc genhtml_function_coverage=1
00:21:12.731 --rc genhtml_legend=1
00:21:12.731 --rc geninfo_all_blocks=1
00:21:12.731 --rc geninfo_unexecuted_blocks=1
00:21:12.731
00:21:12.731 '
00:21:12.731 17:17:42 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:21:12.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:12.731 --rc genhtml_branch_coverage=1
00:21:12.731 --rc genhtml_function_coverage=1
00:21:12.731 --rc genhtml_legend=1
00:21:12.731 --rc geninfo_all_blocks=1
00:21:12.731 --rc geninfo_unexecuted_blocks=1
00:21:12.731
00:21:12.731 '
00:21:12.731 17:17:42 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:21:12.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:12.731 --rc genhtml_branch_coverage=1
00:21:12.731 --rc genhtml_function_coverage=1
00:21:12.731 --rc genhtml_legend=1
00:21:12.731 --rc geninfo_all_blocks=1
00:21:12.731 --rc geninfo_unexecuted_blocks=1
00:21:12.731
00:21:12.731 '
00:21:12.731 17:17:42 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:21:12.731 17:17:42 event -- bdev/nbd_common.sh@6 -- # set -e
00:21:12.731 17:17:42 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:21:12.731 17:17:42 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:21:12.731 17:17:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:12.731 17:17:42 event -- common/autotest_common.sh@10 -- # set +x
00:21:12.731 ************************************
00:21:12.731 START TEST event_perf
00:21:12.731 ************************************
00:21:12.731 17:17:42 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:21:12.990 Running I/O for 1 seconds...[2024-11-26 17:17:42.883794] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization...
00:21:12.990 [2024-11-26 17:17:42.883929] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58102 ] 00:21:12.990 [2024-11-26 17:17:43.074649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:13.250 [2024-11-26 17:17:43.226374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.250 [2024-11-26 17:17:43.226490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.250 Running I/O for 1 seconds...[2024-11-26 17:17:43.226685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.250 [2024-11-26 17:17:43.226726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.626 00:21:14.626 lcore 0: 113278 00:21:14.626 lcore 1: 113277 00:21:14.626 lcore 2: 113278 00:21:14.626 lcore 3: 113279 00:21:14.626 done. 
00:21:14.626 00:21:14.626 real 0m1.671s 00:21:14.626 user 0m4.384s 00:21:14.626 sys 0m0.157s 00:21:14.627 17:17:44 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:14.627 17:17:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:21:14.627 ************************************ 00:21:14.627 END TEST event_perf 00:21:14.627 ************************************ 00:21:14.627 17:17:44 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:21:14.627 17:17:44 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:14.627 17:17:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.627 17:17:44 event -- common/autotest_common.sh@10 -- # set +x 00:21:14.627 ************************************ 00:21:14.627 START TEST event_reactor 00:21:14.627 ************************************ 00:21:14.627 17:17:44 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:21:14.627 [2024-11-26 17:17:44.630543] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:21:14.627 [2024-11-26 17:17:44.630691] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58140 ] 00:21:14.885 [2024-11-26 17:17:44.814706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.885 [2024-11-26 17:17:44.958057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.264 test_start 00:21:16.264 oneshot 00:21:16.264 tick 100 00:21:16.264 tick 100 00:21:16.264 tick 250 00:21:16.264 tick 100 00:21:16.264 tick 100 00:21:16.264 tick 100 00:21:16.264 tick 250 00:21:16.264 tick 500 00:21:16.265 tick 100 00:21:16.265 tick 100 00:21:16.265 tick 250 00:21:16.265 tick 100 00:21:16.265 tick 100 00:21:16.265 test_end 00:21:16.265 00:21:16.265 real 0m1.619s 00:21:16.265 user 0m1.386s 00:21:16.265 sys 0m0.124s 00:21:16.265 17:17:46 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.265 ************************************ 00:21:16.265 END TEST event_reactor 00:21:16.265 ************************************ 00:21:16.265 17:17:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:21:16.265 17:17:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:21:16.265 17:17:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:16.265 17:17:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.265 17:17:46 event -- common/autotest_common.sh@10 -- # set +x 00:21:16.265 ************************************ 00:21:16.265 START TEST event_reactor_perf 00:21:16.265 ************************************ 00:21:16.265 17:17:46 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:21:16.265 [2024-11-26 
17:17:46.322159] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:21:16.265 [2024-11-26 17:17:46.322292] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58178 ] 00:21:16.524 [2024-11-26 17:17:46.505527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.783 [2024-11-26 17:17:46.660669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.157 test_start 00:21:18.157 test_end 00:21:18.157 Performance: 339889 events per second 00:21:18.157 00:21:18.157 real 0m1.647s 00:21:18.157 user 0m1.421s 00:21:18.157 sys 0m0.115s 00:21:18.157 ************************************ 00:21:18.157 END TEST event_reactor_perf 00:21:18.157 ************************************ 00:21:18.157 17:17:47 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:18.157 17:17:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:21:18.157 17:17:47 event -- event/event.sh@49 -- # uname -s 00:21:18.157 17:17:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:21:18.157 17:17:47 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:21:18.157 17:17:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:18.157 17:17:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:18.157 17:17:47 event -- common/autotest_common.sh@10 -- # set +x 00:21:18.157 ************************************ 00:21:18.157 START TEST event_scheduler 00:21:18.157 ************************************ 00:21:18.157 17:17:47 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:21:18.157 * Looking for test storage... 
00:21:18.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:21:18.157 17:17:48 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:18.157 17:17:48 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:18.157 17:17:48 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:21:18.157 17:17:48 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:18.157 17:17:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:21:18.157 17:17:48 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:18.157 17:17:48 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:18.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.157 --rc genhtml_branch_coverage=1 00:21:18.157 --rc genhtml_function_coverage=1 00:21:18.157 --rc genhtml_legend=1 00:21:18.157 --rc geninfo_all_blocks=1 00:21:18.157 --rc geninfo_unexecuted_blocks=1 00:21:18.157 00:21:18.157 ' 00:21:18.157 17:17:48 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:18.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.157 --rc genhtml_branch_coverage=1 00:21:18.157 --rc genhtml_function_coverage=1 00:21:18.157 --rc 
genhtml_legend=1 00:21:18.157 --rc geninfo_all_blocks=1 00:21:18.157 --rc geninfo_unexecuted_blocks=1 00:21:18.157 00:21:18.157 ' 00:21:18.157 17:17:48 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:18.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.157 --rc genhtml_branch_coverage=1 00:21:18.157 --rc genhtml_function_coverage=1 00:21:18.157 --rc genhtml_legend=1 00:21:18.157 --rc geninfo_all_blocks=1 00:21:18.157 --rc geninfo_unexecuted_blocks=1 00:21:18.157 00:21:18.157 ' 00:21:18.157 17:17:48 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:18.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.157 --rc genhtml_branch_coverage=1 00:21:18.157 --rc genhtml_function_coverage=1 00:21:18.157 --rc genhtml_legend=1 00:21:18.157 --rc geninfo_all_blocks=1 00:21:18.157 --rc geninfo_unexecuted_blocks=1 00:21:18.157 00:21:18.157 ' 00:21:18.157 17:17:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:21:18.157 17:17:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58254 00:21:18.157 17:17:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:21:18.157 17:17:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:21:18.157 17:17:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58254 00:21:18.157 17:17:48 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58254 ']' 00:21:18.157 17:17:48 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.157 17:17:48 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.158 17:17:48 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:21:18.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.158 17:17:48 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.158 17:17:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:18.416 [2024-11-26 17:17:48.329705] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:21:18.416 [2024-11-26 17:17:48.330060] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58254 ] 00:21:18.416 [2024-11-26 17:17:48.526928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:18.675 [2024-11-26 17:17:48.663614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.675 [2024-11-26 17:17:48.663787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.675 [2024-11-26 17:17:48.663952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.675 [2024-11-26 17:17:48.663974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.243 17:17:49 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.243 17:17:49 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:21:19.243 17:17:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:21:19.243 17:17:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.243 17:17:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:19.243 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:19.243 POWER: Cannot set governor of lcore 0 to userspace 00:21:19.243 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:19.243 POWER: Cannot set governor of lcore 0 to performance 00:21:19.243 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:19.243 POWER: Cannot set governor of lcore 0 to userspace 00:21:19.243 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:19.243 POWER: Cannot set governor of lcore 0 to userspace 00:21:19.243 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:21:19.243 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:21:19.243 POWER: Unable to set Power Management Environment for lcore 0 00:21:19.243 [2024-11-26 17:17:49.306218] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:21:19.243 [2024-11-26 17:17:49.306250] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:21:19.243 [2024-11-26 17:17:49.306264] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:21:19.243 [2024-11-26 17:17:49.306293] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:21:19.243 [2024-11-26 17:17:49.306306] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:21:19.243 [2024-11-26 17:17:49.306320] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:21:19.243 17:17:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.243 17:17:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:21:19.243 17:17:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.243 17:17:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:19.810 [2024-11-26 17:17:49.723643] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:21:19.810 17:17:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.810 17:17:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:21:19.810 17:17:49 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:19.810 17:17:49 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.810 17:17:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:19.810 ************************************ 00:21:19.810 START TEST scheduler_create_thread 00:21:19.810 ************************************ 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.810 2 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.810 3 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.810 4 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.810 5 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.810 6 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:21:19.810 7 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.810 8 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.810 9 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.810 17:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:20.747 10 00:21:20.747 17:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.747 17:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:21:20.747 17:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.747 17:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:22.125 17:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.125 17:17:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:21:22.125 17:17:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:21:22.125 17:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.125 17:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:22.692 17:17:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.692 17:17:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:21:22.692 17:17:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.692 17:17:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:23.628 17:17:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.628 17:17:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:21:23.628 17:17:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:21:23.628 17:17:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.628 17:17:53 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:24.194 ************************************ 00:21:24.194 END TEST scheduler_create_thread 00:21:24.194 ************************************ 00:21:24.194 17:17:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.194 00:21:24.194 real 0m4.389s 00:21:24.194 user 0m0.026s 00:21:24.194 sys 0m0.011s 00:21:24.194 17:17:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.194 17:17:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:24.194 17:17:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:24.194 17:17:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58254 00:21:24.194 17:17:54 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58254 ']' 00:21:24.194 17:17:54 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58254 00:21:24.194 17:17:54 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:21:24.194 17:17:54 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.194 17:17:54 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58254 00:21:24.194 killing process with pid 58254 00:21:24.194 17:17:54 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:24.194 17:17:54 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:24.194 17:17:54 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58254' 00:21:24.194 17:17:54 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58254 00:21:24.194 17:17:54 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58254 00:21:24.452 [2024-11-26 17:17:54.406137] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:21:25.861 00:21:25.861 real 0m7.826s 00:21:25.861 user 0m18.218s 00:21:25.861 sys 0m0.687s 00:21:25.861 17:17:55 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.861 17:17:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:25.861 ************************************ 00:21:25.861 END TEST event_scheduler 00:21:25.861 ************************************ 00:21:25.861 17:17:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:21:25.861 17:17:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:21:25.861 17:17:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:25.861 17:17:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.861 17:17:55 event -- common/autotest_common.sh@10 -- # set +x 00:21:25.861 ************************************ 00:21:25.861 START TEST app_repeat 00:21:25.861 ************************************ 00:21:25.861 17:17:55 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:21:25.861 17:17:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:25.861 17:17:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:25.861 17:17:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:21:25.861 17:17:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:25.861 17:17:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:21:25.861 17:17:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:21:25.861 17:17:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:21:25.861 17:17:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58382 00:21:25.861 17:17:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:21:25.861 Process app_repeat pid: 58382 00:21:25.861 
17:17:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58382' 00:21:25.861 17:17:55 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:21:25.861 17:17:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:21:25.861 spdk_app_start Round 0 00:21:25.861 17:17:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:21:25.861 17:17:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58382 /var/tmp/spdk-nbd.sock 00:21:25.861 17:17:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58382 ']' 00:21:25.861 17:17:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:25.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:25.861 17:17:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.861 17:17:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:25.861 17:17:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.861 17:17:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:21:25.861 [2024-11-26 17:17:55.970729] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:21:25.861 [2024-11-26 17:17:55.970886] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58382 ] 00:21:26.119 [2024-11-26 17:17:56.162234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:26.378 [2024-11-26 17:17:56.320482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.378 [2024-11-26 17:17:56.320561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.945 17:17:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.945 17:17:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:21:26.945 17:17:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:27.204 Malloc0 00:21:27.204 17:17:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:27.463 Malloc1 00:21:27.722 17:17:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:27.722 17:17:57 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:21:27.722 /dev/nbd0 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:27.722 17:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:27.981 17:17:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:27.981 17:17:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:21:27.981 17:17:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:27.981 17:17:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:27.981 17:17:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:27.981 17:17:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:21:27.981 17:17:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:27.981 17:17:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:27.981 17:17:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:27.981 1+0 records in 00:21:27.981 1+0 
records out 00:21:27.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000908325 s, 4.5 MB/s 00:21:27.981 17:17:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:27.981 17:17:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:21:27.981 17:17:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:27.981 17:17:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:27.981 17:17:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:21:27.981 17:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:27.981 17:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:27.981 17:17:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:21:28.239 /dev/nbd1 00:21:28.239 17:17:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:28.239 17:17:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:28.239 17:17:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:28.239 17:17:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:21:28.239 17:17:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:28.239 17:17:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:28.239 17:17:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:28.239 17:17:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:21:28.239 17:17:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:28.239 17:17:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:28.239 17:17:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:28.239 1+0 records in 00:21:28.239 1+0 records out 00:21:28.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046287 s, 8.8 MB/s 00:21:28.239 17:17:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:28.239 17:17:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:21:28.239 17:17:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:28.239 17:17:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:28.239 17:17:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:21:28.239 17:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:28.239 17:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:28.239 17:17:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:28.239 17:17:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:28.239 17:17:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:28.499 17:17:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:28.499 { 00:21:28.499 "nbd_device": "/dev/nbd0", 00:21:28.500 "bdev_name": "Malloc0" 00:21:28.500 }, 00:21:28.500 { 00:21:28.500 "nbd_device": "/dev/nbd1", 00:21:28.500 "bdev_name": "Malloc1" 00:21:28.500 } 00:21:28.500 ]' 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:28.500 { 00:21:28.500 "nbd_device": "/dev/nbd0", 00:21:28.500 "bdev_name": "Malloc0" 00:21:28.500 }, 00:21:28.500 { 00:21:28.500 "nbd_device": "/dev/nbd1", 00:21:28.500 "bdev_name": "Malloc1" 00:21:28.500 } 00:21:28.500 ]' 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:28.500 /dev/nbd1' 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:28.500 /dev/nbd1' 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:21:28.500 256+0 records in 00:21:28.500 256+0 records out 00:21:28.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177745 s, 59.0 MB/s 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:28.500 256+0 records in 00:21:28.500 256+0 records out 00:21:28.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0388636 s, 27.0 MB/s 00:21:28.500 17:17:58 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:28.500 256+0 records in 00:21:28.500 256+0 records out 00:21:28.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0379338 s, 27.6 MB/s 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:28.500 17:17:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:21:28.759 17:17:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:28.759 17:17:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:21:28.759 17:17:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:28.759 17:17:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:28.759 17:17:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:28.759 17:17:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:21:28.759 17:17:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:28.759 17:17:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:29.016 17:17:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:29.016 17:17:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:29.016 17:17:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:29.016 17:17:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:29.016 17:17:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.016 17:17:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:29.016 17:17:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:29.016 17:17:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:29.016 17:17:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:29.016 17:17:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:29.276 17:17:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:29.276 17:17:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:29.276 17:17:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:29.276 17:17:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:29.276 17:17:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.276 17:17:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:29.276 17:17:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:21:29.276 17:17:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:29.276 17:17:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:29.276 17:17:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:29.276 17:17:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:29.535 17:17:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:29.535 17:17:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:29.535 17:17:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:29.535 17:17:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:29.535 17:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:21:29.535 17:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:29.535 17:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:21:29.535 17:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:21:29.535 17:17:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:21:29.535 17:17:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:21:29.535 17:17:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:29.535 17:17:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:21:29.535 17:17:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:21:30.103 17:17:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:21:31.478 [2024-11-26 17:18:01.371589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:31.479 [2024-11-26 17:18:01.521213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.479 [2024-11-26 17:18:01.521228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.737 
[2024-11-26 17:18:01.755991] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:21:31.737 [2024-11-26 17:18:01.756121] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:21:33.115 spdk_app_start Round 1 00:21:33.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:33.115 17:18:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:21:33.115 17:18:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:21:33.115 17:18:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58382 /var/tmp/spdk-nbd.sock 00:21:33.115 17:18:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58382 ']' 00:21:33.115 17:18:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:33.115 17:18:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.115 17:18:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:21:33.115 17:18:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.115 17:18:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:21:33.374 17:18:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.374 17:18:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:21:33.374 17:18:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:33.633 Malloc0 00:21:33.633 17:18:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:33.893 Malloc1 00:21:33.893 17:18:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:33.893 17:18:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:33.893 17:18:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:33.893 17:18:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:33.893 17:18:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:33.893 17:18:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:33.893 17:18:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:33.893 17:18:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:33.893 17:18:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:33.893 17:18:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:33.893 17:18:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:33.893 17:18:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:33.893 17:18:03 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:21:33.893 17:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:33.893 17:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:33.893 17:18:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:21:34.152 /dev/nbd0 00:21:34.152 17:18:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:34.153 17:18:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:34.153 17:18:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:34.153 17:18:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:21:34.153 17:18:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:34.153 17:18:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:34.153 17:18:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:34.153 17:18:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:21:34.153 17:18:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:34.153 17:18:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:34.153 17:18:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:34.153 1+0 records in 00:21:34.153 1+0 records out 00:21:34.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546942 s, 7.5 MB/s 00:21:34.153 17:18:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:34.153 17:18:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:21:34.153 17:18:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:34.153 17:18:04 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:34.153 17:18:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:21:34.153 17:18:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:34.153 17:18:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:34.153 17:18:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:21:34.412 /dev/nbd1 00:21:34.412 17:18:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:34.412 17:18:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:34.412 17:18:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:34.412 17:18:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:21:34.412 17:18:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:34.412 17:18:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:34.412 17:18:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:34.412 17:18:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:21:34.412 17:18:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:34.412 17:18:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:34.412 17:18:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:34.412 1+0 records in 00:21:34.412 1+0 records out 00:21:34.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342942 s, 11.9 MB/s 00:21:34.412 17:18:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:34.412 17:18:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:21:34.412 17:18:04 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:34.412 17:18:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:34.412 17:18:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:21:34.412 17:18:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:34.412 17:18:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:34.412 17:18:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:34.412 17:18:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:34.412 17:18:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:34.671 17:18:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:34.671 { 00:21:34.671 "nbd_device": "/dev/nbd0", 00:21:34.671 "bdev_name": "Malloc0" 00:21:34.671 }, 00:21:34.671 { 00:21:34.671 "nbd_device": "/dev/nbd1", 00:21:34.671 "bdev_name": "Malloc1" 00:21:34.671 } 00:21:34.671 ]' 00:21:34.671 17:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:34.671 { 00:21:34.671 "nbd_device": "/dev/nbd0", 00:21:34.671 "bdev_name": "Malloc0" 00:21:34.671 }, 00:21:34.671 { 00:21:34.671 "nbd_device": "/dev/nbd1", 00:21:34.671 "bdev_name": "Malloc1" 00:21:34.671 } 00:21:34.671 ]' 00:21:34.671 17:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:34.929 /dev/nbd1' 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:34.929 /dev/nbd1' 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:21:34.929 
17:18:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:21:34.929 256+0 records in 00:21:34.929 256+0 records out 00:21:34.929 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135114 s, 77.6 MB/s 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:34.929 256+0 records in 00:21:34.929 256+0 records out 00:21:34.929 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0370291 s, 28.3 MB/s 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:34.929 256+0 records in 00:21:34.929 256+0 records out 00:21:34.929 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329927 s, 31.8 MB/s 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:34.929 17:18:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:21:34.930 17:18:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:34.930 17:18:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:34.930 17:18:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:34.930 17:18:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:21:34.930 17:18:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:34.930 17:18:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:35.188 17:18:05 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:35.188 17:18:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:35.188 17:18:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:35.188 17:18:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:35.188 17:18:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:35.188 17:18:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:35.188 17:18:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:35.188 17:18:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:35.188 17:18:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:35.188 17:18:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:35.446 17:18:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:35.446 17:18:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:35.446 17:18:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:35.446 17:18:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:35.446 17:18:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:35.446 17:18:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:35.446 17:18:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:35.446 17:18:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:35.446 17:18:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:35.446 17:18:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:35.446 17:18:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:35.705 17:18:05 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:35.705 17:18:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:35.705 17:18:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:35.705 17:18:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:35.705 17:18:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:21:35.705 17:18:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:35.705 17:18:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:21:35.705 17:18:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:21:35.705 17:18:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:21:35.705 17:18:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:21:35.705 17:18:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:35.705 17:18:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:21:35.705 17:18:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:21:36.274 17:18:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:21:37.650 [2024-11-26 17:18:07.582438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:37.650 [2024-11-26 17:18:07.746148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.650 [2024-11-26 17:18:07.746178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.910 [2024-11-26 17:18:07.996836] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:21:37.910 [2024-11-26 17:18:07.996961] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:21:39.315 spdk_app_start Round 2 00:21:39.315 17:18:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:21:39.315 17:18:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:21:39.315 17:18:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58382 /var/tmp/spdk-nbd.sock 00:21:39.315 17:18:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58382 ']' 00:21:39.315 17:18:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:39.315 17:18:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:39.315 17:18:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:39.315 17:18:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.315 17:18:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:21:39.573 17:18:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.573 17:18:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:21:39.573 17:18:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:39.832 Malloc0 00:21:39.832 17:18:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:40.399 Malloc1 00:21:40.399 17:18:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:40.399 
17:18:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:21:40.399 /dev/nbd0 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:40.399 17:18:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:40.399 17:18:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:40.399 17:18:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:21:40.399 17:18:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:40.399 17:18:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:40.399 17:18:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:40.658 17:18:10 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:21:40.658 17:18:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:40.658 17:18:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:40.658 17:18:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:40.658 1+0 records in 00:21:40.658 1+0 records out 00:21:40.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323325 s, 12.7 MB/s 00:21:40.658 17:18:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:40.658 17:18:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:21:40.658 17:18:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:40.658 17:18:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:40.658 17:18:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:21:40.658 17:18:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:40.658 17:18:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:40.658 17:18:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:21:40.917 /dev/nbd1 00:21:40.917 17:18:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:40.917 17:18:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:40.917 17:18:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:40.917 17:18:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:21:40.917 17:18:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:40.917 17:18:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:40.917 17:18:10 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:40.917 17:18:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:21:40.917 17:18:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:40.917 17:18:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:40.917 17:18:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:40.917 1+0 records in 00:21:40.917 1+0 records out 00:21:40.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476054 s, 8.6 MB/s 00:21:40.917 17:18:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:40.917 17:18:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:21:40.917 17:18:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:40.917 17:18:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:40.917 17:18:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:21:40.917 17:18:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:40.917 17:18:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:40.917 17:18:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:40.917 17:18:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:40.917 17:18:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:41.175 { 00:21:41.175 "nbd_device": "/dev/nbd0", 00:21:41.175 "bdev_name": "Malloc0" 00:21:41.175 }, 00:21:41.175 { 00:21:41.175 "nbd_device": "/dev/nbd1", 00:21:41.175 "bdev_name": 
"Malloc1" 00:21:41.175 } 00:21:41.175 ]' 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:41.175 { 00:21:41.175 "nbd_device": "/dev/nbd0", 00:21:41.175 "bdev_name": "Malloc0" 00:21:41.175 }, 00:21:41.175 { 00:21:41.175 "nbd_device": "/dev/nbd1", 00:21:41.175 "bdev_name": "Malloc1" 00:21:41.175 } 00:21:41.175 ]' 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:41.175 /dev/nbd1' 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:41.175 /dev/nbd1' 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:41.175 17:18:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:21:41.175 256+0 records in 00:21:41.175 256+0 records out 00:21:41.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012792 s, 82.0 MB/s 
00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:41.176 256+0 records in 00:21:41.176 256+0 records out 00:21:41.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0337928 s, 31.0 MB/s 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:41.176 256+0 records in 00:21:41.176 256+0 records out 00:21:41.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0375304 s, 27.9 MB/s 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:41.176 17:18:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:41.434 17:18:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:41.434 17:18:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:41.434 17:18:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:41.434 17:18:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:41.434 17:18:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:41.434 17:18:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:41.434 17:18:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:41.434 17:18:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:41.434 17:18:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:41.434 17:18:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:41.692 17:18:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:41.692 17:18:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:21:41.692 17:18:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:41.692 17:18:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:41.692 17:18:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:41.692 17:18:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:41.692 17:18:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:41.692 17:18:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:41.692 17:18:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:41.692 17:18:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:41.692 17:18:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:41.952 17:18:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:41.952 17:18:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:41.952 17:18:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:41.952 17:18:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:41.952 17:18:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:21:41.952 17:18:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:41.952 17:18:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:21:41.952 17:18:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:21:41.952 17:18:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:21:41.952 17:18:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:21:41.952 17:18:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:41.952 17:18:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:21:41.952 17:18:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:21:42.520 17:18:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:21:43.898 [2024-11-26 17:18:13.835736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:43.898 [2024-11-26 17:18:13.979799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.898 [2024-11-26 17:18:13.979800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.157 [2024-11-26 17:18:14.218912] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:21:44.157 [2024-11-26 17:18:14.219029] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:21:45.533 17:18:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58382 /var/tmp/spdk-nbd.sock 00:21:45.533 17:18:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58382 ']' 00:21:45.533 17:18:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:45.533 17:18:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:45.533 17:18:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:21:45.533 17:18:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.533 17:18:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:21:45.790 17:18:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.790 17:18:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:21:45.790 17:18:15 event.app_repeat -- event/event.sh@39 -- # killprocess 58382 00:21:45.790 17:18:15 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58382 ']' 00:21:45.790 17:18:15 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58382 00:21:45.790 17:18:15 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:21:45.790 17:18:15 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.790 17:18:15 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58382 00:21:46.046 17:18:15 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.046 17:18:15 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.046 17:18:15 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58382' 00:21:46.046 killing process with pid 58382 00:21:46.046 17:18:15 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58382 00:21:46.046 17:18:15 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58382 00:21:47.421 spdk_app_start is called in Round 0. 00:21:47.421 Shutdown signal received, stop current app iteration 00:21:47.421 Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 reinitialization... 00:21:47.421 spdk_app_start is called in Round 1. 00:21:47.421 Shutdown signal received, stop current app iteration 00:21:47.421 Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 reinitialization... 00:21:47.421 spdk_app_start is called in Round 2. 
00:21:47.421 Shutdown signal received, stop current app iteration 00:21:47.421 Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 reinitialization... 00:21:47.421 spdk_app_start is called in Round 3. 00:21:47.421 Shutdown signal received, stop current app iteration 00:21:47.421 17:18:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:21:47.421 17:18:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:21:47.421 00:21:47.421 real 0m21.290s 00:21:47.421 user 0m45.414s 00:21:47.421 sys 0m3.717s 00:21:47.421 17:18:17 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.421 17:18:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:21:47.421 ************************************ 00:21:47.421 END TEST app_repeat 00:21:47.421 ************************************ 00:21:47.421 17:18:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:21:47.421 17:18:17 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:21:47.421 17:18:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:47.421 17:18:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.421 17:18:17 event -- common/autotest_common.sh@10 -- # set +x 00:21:47.421 ************************************ 00:21:47.421 START TEST cpu_locks 00:21:47.421 ************************************ 00:21:47.421 17:18:17 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:21:47.421 * Looking for test storage... 
00:21:47.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:21:47.421 17:18:17 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:47.421 17:18:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:21:47.421 17:18:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:47.421 17:18:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:47.421 17:18:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:21:47.421 17:18:17 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:47.421 17:18:17 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:47.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.421 --rc genhtml_branch_coverage=1 00:21:47.421 --rc genhtml_function_coverage=1 00:21:47.421 --rc genhtml_legend=1 00:21:47.421 --rc geninfo_all_blocks=1 00:21:47.421 --rc geninfo_unexecuted_blocks=1 00:21:47.421 00:21:47.421 ' 00:21:47.421 17:18:17 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:47.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.421 --rc genhtml_branch_coverage=1 00:21:47.421 --rc genhtml_function_coverage=1 00:21:47.421 --rc genhtml_legend=1 00:21:47.421 --rc geninfo_all_blocks=1 00:21:47.421 --rc geninfo_unexecuted_blocks=1 
00:21:47.421 00:21:47.421 ' 00:21:47.421 17:18:17 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:47.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.421 --rc genhtml_branch_coverage=1 00:21:47.421 --rc genhtml_function_coverage=1 00:21:47.421 --rc genhtml_legend=1 00:21:47.421 --rc geninfo_all_blocks=1 00:21:47.421 --rc geninfo_unexecuted_blocks=1 00:21:47.421 00:21:47.421 ' 00:21:47.421 17:18:17 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:47.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.421 --rc genhtml_branch_coverage=1 00:21:47.421 --rc genhtml_function_coverage=1 00:21:47.421 --rc genhtml_legend=1 00:21:47.421 --rc geninfo_all_blocks=1 00:21:47.421 --rc geninfo_unexecuted_blocks=1 00:21:47.421 00:21:47.421 ' 00:21:47.421 17:18:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:21:47.421 17:18:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:21:47.421 17:18:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:21:47.421 17:18:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:21:47.421 17:18:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:47.421 17:18:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.421 17:18:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:21:47.421 ************************************ 00:21:47.421 START TEST default_locks 00:21:47.421 ************************************ 00:21:47.421 17:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:21:47.421 17:18:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58855 00:21:47.422 17:18:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:47.422 
17:18:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58855 00:21:47.422 17:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58855 ']' 00:21:47.422 17:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.422 17:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.422 17:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.422 17:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.422 17:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:21:47.680 [2024-11-26 17:18:17.641655] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:21:47.680 [2024-11-26 17:18:17.641821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58855 ] 00:21:47.938 [2024-11-26 17:18:17.830224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.938 [2024-11-26 17:18:17.987445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.310 17:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.310 17:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:21:49.310 17:18:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58855 00:21:49.310 17:18:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58855 00:21:49.310 17:18:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:21:49.568 17:18:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58855 00:21:49.568 17:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58855 ']' 00:21:49.569 17:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58855 00:21:49.569 17:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:21:49.569 17:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.569 17:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58855 00:21:49.569 17:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:49.569 17:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:49.569 killing process with pid 58855 00:21:49.569 17:18:19 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58855' 00:21:49.569 17:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58855 00:21:49.569 17:18:19 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58855 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58855 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58855 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58855 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58855 ']' 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:21:52.855 ERROR: process (pid: 58855) is no longer running 00:21:52.855 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58855) - No such process 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:21:52.855 00:21:52.855 real 0m4.780s 00:21:52.855 user 0m4.577s 00:21:52.855 sys 0m0.892s 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:52.855 17:18:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:21:52.855 ************************************ 00:21:52.855 END TEST default_locks 00:21:52.855 ************************************ 00:21:52.855 17:18:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:21:52.855 17:18:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:21:52.855 17:18:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.855 17:18:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:21:52.855 ************************************ 00:21:52.855 START TEST default_locks_via_rpc 00:21:52.855 ************************************ 00:21:52.855 17:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:21:52.855 17:18:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58938 00:21:52.855 17:18:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58938 00:21:52.855 17:18:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:52.855 17:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58938 ']' 00:21:52.855 17:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.855 17:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.855 17:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.855 17:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.855 17:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:52.855 [2024-11-26 17:18:22.488694] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:21:52.855 [2024-11-26 17:18:22.488874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58938 ]
00:21:52.855 [2024-11-26 17:18:22.677369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:52.855 [2024-11-26 17:18:22.828652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58938
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58938
00:21:54.233 17:18:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:21:54.493 17:18:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58938
00:21:54.493 17:18:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58938 ']'
00:21:54.493 17:18:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58938
00:21:54.493 17:18:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:21:54.493 17:18:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:54.493 17:18:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58938
00:21:54.493 17:18:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:54.493 17:18:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:54.493 killing process with pid 58938
00:21:54.493 17:18:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58938'
00:21:54.493 17:18:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58938
00:21:54.493 17:18:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58938
00:21:57.782
00:21:57.782 real	0m4.923s
00:21:57.782 user	0m4.757s
00:21:57.782 sys	0m0.922s
00:21:57.782 17:18:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:57.782 17:18:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:21:57.782 ************************************
00:21:57.782 END TEST default_locks_via_rpc
00:21:57.782 ************************************
00:21:57.782 17:18:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:21:57.782 17:18:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:21:57.782 17:18:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:57.782 17:18:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:21:57.782 ************************************
00:21:57.782 START TEST non_locking_app_on_locked_coremask
00:21:57.782 ************************************
00:21:57.782 17:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:21:57.782 17:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59024
00:21:57.782 17:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:21:57.782 17:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59024 /var/tmp/spdk.sock
00:21:57.782 17:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59024 ']'
00:21:57.782 17:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:57.782 17:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:57.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:57.783 17:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:57.783 17:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:57.783 17:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:21:57.783 [2024-11-26 17:18:27.486308] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization...
00:21:57.783 [2024-11-26 17:18:27.486697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59024 ]
00:21:57.783 [2024-11-26 17:18:27.673450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:57.783 [2024-11-26 17:18:27.826100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:59.160 17:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:59.160 17:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:21:59.160 17:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59040
00:21:59.160 17:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59040 /var/tmp/spdk2.sock
00:21:59.160 17:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:21:59.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:21:59.160 17:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59040 ']'
00:21:59.160 17:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:21:59.160 17:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:59.160 17:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:21:59.160 17:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:59.160 17:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:21:59.160 [2024-11-26 17:18:28.996025] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization...
00:21:59.160 [2024-11-26 17:18:28.996166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59040 ]
00:21:59.160 [2024-11-26 17:18:29.183615] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:21:59.160 [2024-11-26 17:18:29.183685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:59.419 [2024-11-26 17:18:29.478801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:01.953 17:18:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:01.953 17:18:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:22:01.953 17:18:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59024
00:22:01.953 17:18:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59024
00:22:01.953 17:18:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:22:02.521 17:18:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59024
00:22:02.521 17:18:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59024 ']'
00:22:02.521 17:18:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59024
00:22:02.521 17:18:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:22:02.521 17:18:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:02.521 17:18:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59024
00:22:02.521 17:18:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:02.521 17:18:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:02.521 killing process with pid 59024
00:22:02.521 17:18:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59024'
00:22:02.521 17:18:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59024
00:22:02.521 17:18:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59024
00:22:09.107 17:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59040
00:22:09.107 17:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59040 ']'
00:22:09.107 17:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59040
00:22:09.107 17:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:22:09.107 17:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:09.107 17:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59040
00:22:09.107 17:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:09.107 17:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:09.107 killing process with pid 59040
00:22:09.107 17:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59040'
00:22:09.107 17:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59040
00:22:09.107 17:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59040
00:22:11.027 ************************************
00:22:11.027 END TEST non_locking_app_on_locked_coremask
00:22:11.027 ************************************
00:22:11.027
00:22:11.027 real	0m13.341s
00:22:11.027 user	0m13.514s
00:22:11.027 sys	0m1.662s
00:22:11.028 17:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:11.028 17:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:22:11.028 17:18:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:22:11.028 17:18:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:22:11.028 17:18:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:11.028 17:18:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:22:11.028 ************************************
00:22:11.028 START TEST locking_app_on_unlocked_coremask
00:22:11.028 ************************************
00:22:11.028 17:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:22:11.028 17:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59210
00:22:11.028 17:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59210 /var/tmp/spdk.sock
00:22:11.028 17:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:22:11.028 17:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59210 ']'
00:22:11.028 17:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:11.028 17:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:11.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:11.028 17:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:11.028 17:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:11.028 17:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:22:11.028 [2024-11-26 17:18:40.915111] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization...
00:22:11.028 [2024-11-26 17:18:40.915951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59210 ]
00:22:11.028 [2024-11-26 17:18:41.087063] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:22:11.028 [2024-11-26 17:18:41.087390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:11.287 [2024-11-26 17:18:41.247213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:12.223 17:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:12.223 17:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:22:12.223 17:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59226
00:22:12.223 17:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59226 /var/tmp/spdk2.sock
00:22:12.223 17:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:22:12.223 17:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59226 ']'
00:22:12.223 17:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:22:12.223 17:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:12.223 17:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:22:12.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:22:12.223 17:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:12.223 17:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:22:12.482 [2024-11-26 17:18:42.389302] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization...
00:22:12.482 [2024-11-26 17:18:42.389683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59226 ]
00:22:12.482 [2024-11-26 17:18:42.582641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:13.052 [2024-11-26 17:18:42.900986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:15.586 17:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:15.586 17:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:22:15.586 17:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59226
00:22:15.586 17:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59226
00:22:15.586 17:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:22:16.154 17:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59210
00:22:16.154 17:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59210 ']'
00:22:16.154 17:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59210
00:22:16.154 17:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:22:16.154 17:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:16.154 17:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59210
00:22:16.154 17:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:16.154 17:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:16.154 killing process with pid 59210
00:22:16.154 17:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59210'
00:22:16.154 17:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59210
00:22:16.154 17:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59210
00:22:21.426 17:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59226
00:22:21.426 17:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59226 ']'
00:22:21.426 17:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59226
00:22:21.426 17:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:22:21.426 17:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:21.426 17:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59226
00:22:21.426 killing process with pid 59226
00:22:21.426 17:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:21.426 17:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:21.426 17:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59226'
00:22:21.426 17:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59226
00:22:21.426 17:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59226
00:22:23.954
00:22:23.954 real	0m13.277s
00:22:23.954 user	0m13.679s
00:22:23.954 sys	0m1.746s
00:22:23.954 17:18:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:24.213 ************************************
00:22:24.213 END TEST locking_app_on_unlocked_coremask
00:22:24.213 ************************************
00:22:24.213 17:18:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:22:24.213 17:18:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:22:24.213 17:18:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:22:24.213 17:18:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:24.213 17:18:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:22:24.213 ************************************
00:22:24.213 START TEST locking_app_on_locked_coremask
00:22:24.213 ************************************
00:22:24.213 17:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:22:24.213 17:18:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59391
00:22:24.213 17:18:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:22:24.213 17:18:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59391 /var/tmp/spdk.sock
00:22:24.213 17:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59391 ']'
00:22:24.213 17:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:24.213 17:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:24.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:24.213 17:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:24.213 17:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:24.213 17:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:22:24.213 [2024-11-26 17:18:54.276982] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization...
00:22:24.213 [2024-11-26 17:18:54.277368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59391 ]
00:22:24.471 [2024-11-26 17:18:54.465989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:24.729 [2024-11-26 17:18:54.621413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59412
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59412 /var/tmp/spdk2.sock
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59412 /var/tmp/spdk2.sock
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:22:25.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59412 /var/tmp/spdk2.sock
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59412 ']'
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:25.757 17:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:22:25.757 [2024-11-26 17:18:55.844673] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization...
00:22:25.757 [2024-11-26 17:18:55.844827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59412 ]
00:22:26.015 [2024-11-26 17:18:56.041660] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59391 has claimed it.
00:22:26.016 [2024-11-26 17:18:56.041748] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:22:26.580 ERROR: process (pid: 59412) is no longer running
00:22:26.580 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59412) - No such process
00:22:26.580 17:18:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:26.580 17:18:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:22:26.580 17:18:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:22:26.580 17:18:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:26.580 17:18:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:26.580 17:18:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:26.580 17:18:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59391
00:22:26.580 17:18:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:22:26.580 17:18:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59391
00:22:27.145 17:18:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59391
00:22:27.145 17:18:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59391 ']'
00:22:27.145 17:18:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59391
00:22:27.145 17:18:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:22:27.145 17:18:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:27.145 17:18:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59391
00:22:27.145 killing process with pid 59391
00:22:27.145 17:18:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:27.145 17:18:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:27.145 17:18:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59391'
00:22:27.145 17:18:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59391
00:22:27.145 17:18:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59391
00:22:30.427
00:22:30.427 real	0m5.788s
00:22:30.427 user	0m5.984s
00:22:30.427 sys	0m1.091s
00:22:30.427 17:18:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:30.427 ************************************
00:22:30.427 END TEST locking_app_on_locked_coremask
00:22:30.427 ************************************
00:22:30.427 17:18:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:22:30.427 17:18:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:22:30.427 17:18:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:22:30.427 17:18:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:30.427 17:18:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:22:30.427 ************************************
00:22:30.427 START TEST locking_overlapped_coremask
00:22:30.427 ************************************
00:22:30.427 17:18:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:22:30.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:30.427 17:18:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59487 00:22:30.427 17:18:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:22:30.427 17:18:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59487 /var/tmp/spdk.sock 00:22:30.427 17:18:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59487 ']' 00:22:30.427 17:18:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.427 17:18:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.427 17:18:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.428 17:18:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.428 17:18:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:30.428 [2024-11-26 17:19:00.123508] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:22:30.428 [2024-11-26 17:19:00.123667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59487 ] 00:22:30.428 [2024-11-26 17:19:00.311908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:30.428 [2024-11-26 17:19:00.454841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.428 [2024-11-26 17:19:00.454955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.428 [2024-11-26 17:19:00.454980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.805 17:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.805 17:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:22:31.805 17:19:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59511 00:22:31.805 17:19:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59511 /var/tmp/spdk2.sock 00:22:31.805 17:19:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:22:31.805 17:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:22:31.805 17:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59511 /var/tmp/spdk2.sock 00:22:31.805 17:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:22:31.805 17:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.805 17:19:01 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:22:31.805 17:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.806 17:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59511 /var/tmp/spdk2.sock 00:22:31.806 17:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59511 ']' 00:22:31.806 17:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:22:31.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:22:31.806 17:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.806 17:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:22:31.806 17:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.806 17:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:31.806 [2024-11-26 17:19:01.619789] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:22:31.806 [2024-11-26 17:19:01.620136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59511 ] 00:22:31.806 [2024-11-26 17:19:01.807177] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59487 has claimed it. 00:22:31.806 [2024-11-26 17:19:01.807277] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:22:32.372 ERROR: process (pid: 59511) is no longer running 00:22:32.372 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59511) - No such process 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59487 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59487 ']' 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59487 00:22:32.372 17:19:02 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59487 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.372 killing process with pid 59487 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59487' 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59487 00:22:32.372 17:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59487 00:22:35.660 00:22:35.660 real 0m5.156s 00:22:35.660 user 0m13.830s 00:22:35.660 sys 0m0.827s 00:22:35.660 17:19:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.660 17:19:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:35.660 ************************************ 00:22:35.660 END TEST locking_overlapped_coremask 00:22:35.660 ************************************ 00:22:35.660 17:19:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:22:35.660 17:19:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:35.660 17:19:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.660 17:19:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:35.660 ************************************ 00:22:35.660 START TEST 
locking_overlapped_coremask_via_rpc 00:22:35.660 ************************************ 00:22:35.660 17:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:22:35.660 17:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59575 00:22:35.660 17:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59575 /var/tmp/spdk.sock 00:22:35.660 17:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:22:35.660 17:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59575 ']' 00:22:35.660 17:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.660 17:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.660 17:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.660 17:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.660 17:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:35.660 [2024-11-26 17:19:05.361385] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:22:35.660 [2024-11-26 17:19:05.361577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59575 ] 00:22:35.660 [2024-11-26 17:19:05.551836] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:22:35.660 [2024-11-26 17:19:05.551933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:35.660 [2024-11-26 17:19:05.710384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.660 [2024-11-26 17:19:05.710551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.660 [2024-11-26 17:19:05.710611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.065 17:19:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.065 17:19:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:37.065 17:19:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:22:37.065 17:19:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59604 00:22:37.065 17:19:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59604 /var/tmp/spdk2.sock 00:22:37.065 17:19:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59604 ']' 00:22:37.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:22:37.065 17:19:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:22:37.065 17:19:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:37.065 17:19:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:22:37.066 17:19:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:37.066 17:19:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:37.066 [2024-11-26 17:19:06.921481] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:22:37.066 [2024-11-26 17:19:06.922051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59604 ] 00:22:37.066 [2024-11-26 17:19:07.132610] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:22:37.066 [2024-11-26 17:19:07.132700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:37.684 [2024-11-26 17:19:07.442465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:37.684 [2024-11-26 17:19:07.445611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.684 [2024-11-26 17:19:07.445619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:39.585 17:19:09 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:39.585 [2024-11-26 17:19:09.661751] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59575 has claimed it. 00:22:39.585 request: 00:22:39.585 { 00:22:39.585 "method": "framework_enable_cpumask_locks", 00:22:39.585 "req_id": 1 00:22:39.585 } 00:22:39.585 Got JSON-RPC error response 00:22:39.585 response: 00:22:39.585 { 00:22:39.585 "code": -32603, 00:22:39.585 "message": "Failed to claim CPU core: 2" 00:22:39.585 } 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59575 /var/tmp/spdk.sock 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59575 ']' 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.585 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:39.843 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.843 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:39.844 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59604 /var/tmp/spdk2.sock 00:22:39.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:22:39.844 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59604 ']' 00:22:39.844 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:22:39.844 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.844 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:22:39.844 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.844 17:19:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:40.103 17:19:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.103 17:19:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:40.103 17:19:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:22:40.103 17:19:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:22:40.103 17:19:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:22:40.103 17:19:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:22:40.103 00:22:40.103 real 0m4.959s 00:22:40.103 user 0m1.591s 00:22:40.103 sys 0m0.291s 00:22:40.103 17:19:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.103 17:19:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:40.103 ************************************ 00:22:40.103 END TEST locking_overlapped_coremask_via_rpc 00:22:40.103 ************************************ 00:22:40.362 17:19:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:22:40.362 17:19:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59575 ]] 00:22:40.362 17:19:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59575 00:22:40.362 17:19:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59575 ']' 00:22:40.362 17:19:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59575 00:22:40.362 17:19:10 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:22:40.362 17:19:10 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.362 17:19:10 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59575 00:22:40.362 killing process with pid 59575 00:22:40.362 17:19:10 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:40.362 17:19:10 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:40.362 17:19:10 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59575' 00:22:40.362 17:19:10 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59575 00:22:40.362 17:19:10 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59575 00:22:43.650 17:19:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59604 ]] 00:22:43.650 17:19:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59604 00:22:43.650 17:19:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59604 ']' 00:22:43.650 17:19:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59604 00:22:43.650 17:19:13 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:22:43.650 17:19:13 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.650 17:19:13 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59604 00:22:43.650 killing process with pid 59604 00:22:43.650 17:19:13 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:43.650 17:19:13 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:43.650 17:19:13 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59604' 00:22:43.650 17:19:13 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59604 00:22:43.650 17:19:13 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59604 00:22:46.186 17:19:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:22:46.186 17:19:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:22:46.186 17:19:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59575 ]] 00:22:46.186 17:19:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59575 00:22:46.186 Process with pid 59575 is not found 00:22:46.186 17:19:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59575 ']' 00:22:46.186 17:19:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59575 00:22:46.186 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59575) - No such process 00:22:46.186 17:19:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59575 is not found' 00:22:46.186 17:19:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59604 ]] 00:22:46.186 17:19:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59604 00:22:46.186 Process with pid 59604 is not found 00:22:46.186 17:19:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59604 ']' 00:22:46.186 17:19:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59604 00:22:46.186 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59604) - No such process 00:22:46.186 17:19:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59604 is not found' 00:22:46.186 17:19:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:22:46.186 ************************************ 00:22:46.186 END TEST cpu_locks 00:22:46.186 ************************************ 00:22:46.186 00:22:46.186 real 0m58.579s 00:22:46.186 user 1m38.342s 00:22:46.186 sys 0m9.001s 00:22:46.186 17:19:15 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:22:46.186 17:19:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:46.186 ************************************ 00:22:46.186 END TEST event 00:22:46.186 ************************************ 00:22:46.186 00:22:46.186 real 1m33.312s 00:22:46.186 user 2m49.423s 00:22:46.186 sys 0m14.226s 00:22:46.186 17:19:15 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.186 17:19:15 event -- common/autotest_common.sh@10 -- # set +x 00:22:46.186 17:19:15 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:22:46.186 17:19:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:46.186 17:19:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.186 17:19:15 -- common/autotest_common.sh@10 -- # set +x 00:22:46.186 ************************************ 00:22:46.186 START TEST thread 00:22:46.186 ************************************ 00:22:46.186 17:19:15 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:22:46.186 * Looking for test storage... 
00:22:46.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:22:46.186 17:19:16 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:46.186 17:19:16 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:22:46.186 17:19:16 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:46.186 17:19:16 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:46.186 17:19:16 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.186 17:19:16 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.186 17:19:16 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.186 17:19:16 thread -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.187 17:19:16 thread -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.187 17:19:16 thread -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.187 17:19:16 thread -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.187 17:19:16 thread -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.187 17:19:16 thread -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.187 17:19:16 thread -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.187 17:19:16 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.187 17:19:16 thread -- scripts/common.sh@344 -- # case "$op" in 00:22:46.187 17:19:16 thread -- scripts/common.sh@345 -- # : 1 00:22:46.187 17:19:16 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.187 17:19:16 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:46.187 17:19:16 thread -- scripts/common.sh@365 -- # decimal 1 00:22:46.187 17:19:16 thread -- scripts/common.sh@353 -- # local d=1 00:22:46.187 17:19:16 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.187 17:19:16 thread -- scripts/common.sh@355 -- # echo 1 00:22:46.187 17:19:16 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.187 17:19:16 thread -- scripts/common.sh@366 -- # decimal 2 00:22:46.187 17:19:16 thread -- scripts/common.sh@353 -- # local d=2 00:22:46.187 17:19:16 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.187 17:19:16 thread -- scripts/common.sh@355 -- # echo 2 00:22:46.187 17:19:16 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.187 17:19:16 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.187 17:19:16 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.187 17:19:16 thread -- scripts/common.sh@368 -- # return 0 00:22:46.187 17:19:16 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.187 17:19:16 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:46.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.187 --rc genhtml_branch_coverage=1 00:22:46.187 --rc genhtml_function_coverage=1 00:22:46.187 --rc genhtml_legend=1 00:22:46.187 --rc geninfo_all_blocks=1 00:22:46.187 --rc geninfo_unexecuted_blocks=1 00:22:46.187 00:22:46.187 ' 00:22:46.187 17:19:16 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:46.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.187 --rc genhtml_branch_coverage=1 00:22:46.187 --rc genhtml_function_coverage=1 00:22:46.187 --rc genhtml_legend=1 00:22:46.187 --rc geninfo_all_blocks=1 00:22:46.187 --rc geninfo_unexecuted_blocks=1 00:22:46.187 00:22:46.187 ' 00:22:46.187 17:19:16 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:46.187 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.187 --rc genhtml_branch_coverage=1 00:22:46.187 --rc genhtml_function_coverage=1 00:22:46.187 --rc genhtml_legend=1 00:22:46.187 --rc geninfo_all_blocks=1 00:22:46.187 --rc geninfo_unexecuted_blocks=1 00:22:46.187 00:22:46.187 ' 00:22:46.187 17:19:16 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:46.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.187 --rc genhtml_branch_coverage=1 00:22:46.187 --rc genhtml_function_coverage=1 00:22:46.187 --rc genhtml_legend=1 00:22:46.187 --rc geninfo_all_blocks=1 00:22:46.187 --rc geninfo_unexecuted_blocks=1 00:22:46.187 00:22:46.187 ' 00:22:46.187 17:19:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:22:46.187 17:19:16 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:22:46.187 17:19:16 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.187 17:19:16 thread -- common/autotest_common.sh@10 -- # set +x 00:22:46.187 ************************************ 00:22:46.187 START TEST thread_poller_perf 00:22:46.187 ************************************ 00:22:46.187 17:19:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:22:46.187 [2024-11-26 17:19:16.274529] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:22:46.187 [2024-11-26 17:19:16.274670] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59811 ] 00:22:46.446 [2024-11-26 17:19:16.459368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.706 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:22:46.706 [2024-11-26 17:19:16.604953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.086 [2024-11-26T17:19:18.200Z] ====================================== 00:22:48.086 [2024-11-26T17:19:18.200Z] busy:2500414924 (cyc) 00:22:48.086 [2024-11-26T17:19:18.200Z] total_run_count: 387000 00:22:48.086 [2024-11-26T17:19:18.200Z] tsc_hz: 2490000000 (cyc) 00:22:48.086 [2024-11-26T17:19:18.200Z] ====================================== 00:22:48.086 [2024-11-26T17:19:18.200Z] poller_cost: 6461 (cyc), 2594 (nsec) 00:22:48.086 00:22:48.086 real 0m1.642s 00:22:48.086 user 0m1.412s 00:22:48.086 sys 0m0.123s 00:22:48.086 17:19:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.086 17:19:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:22:48.086 ************************************ 00:22:48.086 END TEST thread_poller_perf 00:22:48.086 ************************************ 00:22:48.086 17:19:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:22:48.086 17:19:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:22:48.086 17:19:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.086 17:19:17 thread -- common/autotest_common.sh@10 -- # set +x 00:22:48.086 ************************************ 00:22:48.086 START TEST thread_poller_perf 00:22:48.086 
************************************ 00:22:48.086 17:19:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:22:48.086 [2024-11-26 17:19:17.988429] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:22:48.086 [2024-11-26 17:19:17.988578] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59851 ] 00:22:48.086 [2024-11-26 17:19:18.171105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.345 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:22:48.345 [2024-11-26 17:19:18.316038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.727 [2024-11-26T17:19:19.841Z] ====================================== 00:22:49.727 [2024-11-26T17:19:19.841Z] busy:2494198808 (cyc) 00:22:49.727 [2024-11-26T17:19:19.841Z] total_run_count: 4902000 00:22:49.727 [2024-11-26T17:19:19.841Z] tsc_hz: 2490000000 (cyc) 00:22:49.727 [2024-11-26T17:19:19.841Z] ====================================== 00:22:49.727 [2024-11-26T17:19:19.841Z] poller_cost: 508 (cyc), 204 (nsec) 00:22:49.727 00:22:49.727 real 0m1.620s 00:22:49.727 user 0m1.402s 00:22:49.727 sys 0m0.110s 00:22:49.727 17:19:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.727 17:19:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:22:49.727 ************************************ 00:22:49.727 END TEST thread_poller_perf 00:22:49.727 ************************************ 00:22:49.727 17:19:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:22:49.727 00:22:49.727 real 0m3.633s 00:22:49.727 user 0m2.975s 00:22:49.727 sys 0m0.456s 00:22:49.727 17:19:19 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.727 17:19:19 thread -- common/autotest_common.sh@10 -- # set +x 00:22:49.727 ************************************ 00:22:49.727 END TEST thread 00:22:49.727 ************************************ 00:22:49.727 17:19:19 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:22:49.727 17:19:19 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:22:49.727 17:19:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:49.727 17:19:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.727 17:19:19 -- common/autotest_common.sh@10 -- # set +x 00:22:49.727 ************************************ 00:22:49.727 START TEST app_cmdline 00:22:49.727 ************************************ 00:22:49.727 17:19:19 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:22:49.727 * Looking for test storage... 00:22:49.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:22:49.728 17:19:19 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:49.728 17:19:19 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:49.728 17:19:19 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:22:49.987 17:19:19 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@345 -- # : 1 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:22:49.987 17:19:19 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.988 17:19:19 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:22:49.988 17:19:19 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.988 17:19:19 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.988 17:19:19 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.988 17:19:19 app_cmdline -- scripts/common.sh@368 -- # return 0 00:22:49.988 17:19:19 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.988 17:19:19 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:49.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.988 --rc genhtml_branch_coverage=1 00:22:49.988 --rc genhtml_function_coverage=1 00:22:49.988 --rc 
genhtml_legend=1 00:22:49.988 --rc geninfo_all_blocks=1 00:22:49.988 --rc geninfo_unexecuted_blocks=1 00:22:49.988 00:22:49.988 ' 00:22:49.988 17:19:19 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:49.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.988 --rc genhtml_branch_coverage=1 00:22:49.988 --rc genhtml_function_coverage=1 00:22:49.988 --rc genhtml_legend=1 00:22:49.988 --rc geninfo_all_blocks=1 00:22:49.988 --rc geninfo_unexecuted_blocks=1 00:22:49.988 00:22:49.988 ' 00:22:49.988 17:19:19 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:49.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.988 --rc genhtml_branch_coverage=1 00:22:49.988 --rc genhtml_function_coverage=1 00:22:49.988 --rc genhtml_legend=1 00:22:49.988 --rc geninfo_all_blocks=1 00:22:49.988 --rc geninfo_unexecuted_blocks=1 00:22:49.988 00:22:49.988 ' 00:22:49.988 17:19:19 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:49.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.988 --rc genhtml_branch_coverage=1 00:22:49.988 --rc genhtml_function_coverage=1 00:22:49.988 --rc genhtml_legend=1 00:22:49.988 --rc geninfo_all_blocks=1 00:22:49.988 --rc geninfo_unexecuted_blocks=1 00:22:49.988 00:22:49.988 ' 00:22:49.988 17:19:19 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:22:49.988 17:19:19 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59940 00:22:49.988 17:19:19 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:22:49.988 17:19:19 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59940 00:22:49.988 17:19:19 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59940 ']' 00:22:49.988 17:19:19 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.988 17:19:19 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:22:49.988 17:19:19 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.988 17:19:19 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.988 17:19:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:22:49.988 [2024-11-26 17:19:20.028623] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:22:49.988 [2024-11-26 17:19:20.028763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59940 ] 00:22:50.260 [2024-11-26 17:19:20.196997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.260 [2024-11-26 17:19:20.344087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.199 17:19:21 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.199 17:19:21 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:22:51.199 17:19:21 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:22:51.459 { 00:22:51.459 "version": "SPDK v25.01-pre git sha1 ff173863b", 00:22:51.459 "fields": { 00:22:51.459 "major": 25, 00:22:51.459 "minor": 1, 00:22:51.459 "patch": 0, 00:22:51.459 "suffix": "-pre", 00:22:51.459 "commit": "ff173863b" 00:22:51.459 } 00:22:51.459 } 00:22:51.459 17:19:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:22:51.459 17:19:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:22:51.459 17:19:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:22:51.459 17:19:21 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:22:51.459 17:19:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:22:51.459 17:19:21 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.459 17:19:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:22:51.459 17:19:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:22:51.459 17:19:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:22:51.459 17:19:21 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.459 17:19:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:22:51.460 17:19:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:22:51.460 17:19:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:22:51.460 17:19:21 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:22:51.460 17:19:21 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:22:51.460 17:19:21 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:51.460 17:19:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:51.460 17:19:21 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:51.460 17:19:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:51.460 17:19:21 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:51.460 17:19:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:51.460 17:19:21 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:51.460 17:19:21 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:51.460 17:19:21 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:22:51.718 request: 00:22:51.718 { 00:22:51.718 "method": "env_dpdk_get_mem_stats", 00:22:51.718 "req_id": 1 00:22:51.718 } 00:22:51.718 Got JSON-RPC error response 00:22:51.718 response: 00:22:51.718 { 00:22:51.718 "code": -32601, 00:22:51.718 "message": "Method not found" 00:22:51.718 } 00:22:51.718 17:19:21 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:22:51.718 17:19:21 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:51.718 17:19:21 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:51.718 17:19:21 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:51.718 17:19:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59940 00:22:51.719 17:19:21 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59940 ']' 00:22:51.719 17:19:21 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59940 00:22:51.719 17:19:21 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:22:51.719 17:19:21 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.719 17:19:21 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59940 00:22:51.977 17:19:21 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:51.977 17:19:21 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:51.977 killing process with pid 59940 00:22:51.977 17:19:21 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59940' 00:22:51.977 17:19:21 app_cmdline -- common/autotest_common.sh@973 -- # kill 59940 00:22:51.977 17:19:21 app_cmdline -- common/autotest_common.sh@978 -- # wait 59940 00:22:54.520 00:22:54.520 real 0m4.672s 00:22:54.520 user 0m4.851s 00:22:54.520 sys 0m0.730s 00:22:54.520 17:19:24 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:22:54.520 17:19:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:22:54.520 ************************************ 00:22:54.520 END TEST app_cmdline 00:22:54.520 ************************************ 00:22:54.520 17:19:24 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:22:54.520 17:19:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:54.520 17:19:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.520 17:19:24 -- common/autotest_common.sh@10 -- # set +x 00:22:54.520 ************************************ 00:22:54.520 START TEST version 00:22:54.520 ************************************ 00:22:54.520 17:19:24 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:22:54.520 * Looking for test storage... 00:22:54.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:22:54.520 17:19:24 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:54.520 17:19:24 version -- common/autotest_common.sh@1693 -- # lcov --version 00:22:54.520 17:19:24 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:54.520 17:19:24 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:54.520 17:19:24 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:54.520 17:19:24 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:54.520 17:19:24 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:54.520 17:19:24 version -- scripts/common.sh@336 -- # IFS=.-: 00:22:54.520 17:19:24 version -- scripts/common.sh@336 -- # read -ra ver1 00:22:54.785 17:19:24 version -- scripts/common.sh@337 -- # IFS=.-: 00:22:54.785 17:19:24 version -- scripts/common.sh@337 -- # read -ra ver2 00:22:54.785 17:19:24 version -- scripts/common.sh@338 -- # local 'op=<' 00:22:54.785 17:19:24 version -- scripts/common.sh@340 -- # ver1_l=2 00:22:54.786 17:19:24 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:22:54.786 17:19:24 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:54.786 17:19:24 version -- scripts/common.sh@344 -- # case "$op" in 00:22:54.786 17:19:24 version -- scripts/common.sh@345 -- # : 1 00:22:54.786 17:19:24 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:54.786 17:19:24 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:54.786 17:19:24 version -- scripts/common.sh@365 -- # decimal 1 00:22:54.786 17:19:24 version -- scripts/common.sh@353 -- # local d=1 00:22:54.786 17:19:24 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:54.786 17:19:24 version -- scripts/common.sh@355 -- # echo 1 00:22:54.786 17:19:24 version -- scripts/common.sh@365 -- # ver1[v]=1 00:22:54.786 17:19:24 version -- scripts/common.sh@366 -- # decimal 2 00:22:54.786 17:19:24 version -- scripts/common.sh@353 -- # local d=2 00:22:54.786 17:19:24 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:54.786 17:19:24 version -- scripts/common.sh@355 -- # echo 2 00:22:54.786 17:19:24 version -- scripts/common.sh@366 -- # ver2[v]=2 00:22:54.786 17:19:24 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:54.786 17:19:24 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:54.786 17:19:24 version -- scripts/common.sh@368 -- # return 0 00:22:54.786 17:19:24 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:54.786 17:19:24 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:54.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.786 --rc genhtml_branch_coverage=1 00:22:54.786 --rc genhtml_function_coverage=1 00:22:54.786 --rc genhtml_legend=1 00:22:54.786 --rc geninfo_all_blocks=1 00:22:54.786 --rc geninfo_unexecuted_blocks=1 00:22:54.786 00:22:54.786 ' 00:22:54.786 17:19:24 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:22:54.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.786 --rc genhtml_branch_coverage=1 00:22:54.786 --rc genhtml_function_coverage=1 00:22:54.786 --rc genhtml_legend=1 00:22:54.786 --rc geninfo_all_blocks=1 00:22:54.786 --rc geninfo_unexecuted_blocks=1 00:22:54.786 00:22:54.786 ' 00:22:54.786 17:19:24 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:54.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.786 --rc genhtml_branch_coverage=1 00:22:54.786 --rc genhtml_function_coverage=1 00:22:54.786 --rc genhtml_legend=1 00:22:54.786 --rc geninfo_all_blocks=1 00:22:54.786 --rc geninfo_unexecuted_blocks=1 00:22:54.786 00:22:54.786 ' 00:22:54.786 17:19:24 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:54.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.786 --rc genhtml_branch_coverage=1 00:22:54.786 --rc genhtml_function_coverage=1 00:22:54.786 --rc genhtml_legend=1 00:22:54.786 --rc geninfo_all_blocks=1 00:22:54.786 --rc geninfo_unexecuted_blocks=1 00:22:54.786 00:22:54.786 ' 00:22:54.786 17:19:24 version -- app/version.sh@17 -- # get_header_version major 00:22:54.786 17:19:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:22:54.786 17:19:24 version -- app/version.sh@14 -- # cut -f2 00:22:54.786 17:19:24 version -- app/version.sh@14 -- # tr -d '"' 00:22:54.786 17:19:24 version -- app/version.sh@17 -- # major=25 00:22:54.786 17:19:24 version -- app/version.sh@18 -- # get_header_version minor 00:22:54.786 17:19:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:22:54.786 17:19:24 version -- app/version.sh@14 -- # cut -f2 00:22:54.786 17:19:24 version -- app/version.sh@14 -- # tr -d '"' 00:22:54.786 17:19:24 version -- app/version.sh@18 -- # minor=1 00:22:54.786 17:19:24 
version -- app/version.sh@19 -- # get_header_version patch 00:22:54.786 17:19:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:22:54.786 17:19:24 version -- app/version.sh@14 -- # cut -f2 00:22:54.786 17:19:24 version -- app/version.sh@14 -- # tr -d '"' 00:22:54.786 17:19:24 version -- app/version.sh@19 -- # patch=0 00:22:54.786 17:19:24 version -- app/version.sh@20 -- # get_header_version suffix 00:22:54.786 17:19:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:22:54.786 17:19:24 version -- app/version.sh@14 -- # cut -f2 00:22:54.786 17:19:24 version -- app/version.sh@14 -- # tr -d '"' 00:22:54.786 17:19:24 version -- app/version.sh@20 -- # suffix=-pre 00:22:54.786 17:19:24 version -- app/version.sh@22 -- # version=25.1 00:22:54.786 17:19:24 version -- app/version.sh@25 -- # (( patch != 0 )) 00:22:54.786 17:19:24 version -- app/version.sh@28 -- # version=25.1rc0 00:22:54.786 17:19:24 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:22:54.786 17:19:24 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:22:54.786 17:19:24 version -- app/version.sh@30 -- # py_version=25.1rc0 00:22:54.786 17:19:24 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:22:54.786 00:22:54.786 real 0m0.323s 00:22:54.786 user 0m0.195s 00:22:54.786 sys 0m0.189s 00:22:54.786 17:19:24 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:54.786 ************************************ 00:22:54.786 END TEST version 00:22:54.786 ************************************ 00:22:54.786 17:19:24 version -- common/autotest_common.sh@10 -- # set +x 00:22:54.786 
17:19:24 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:22:54.786 17:19:24 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:22:54.786 17:19:24 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:22:54.786 17:19:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:54.786 17:19:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.786 17:19:24 -- common/autotest_common.sh@10 -- # set +x 00:22:54.786 ************************************ 00:22:54.786 START TEST bdev_raid 00:22:54.786 ************************************ 00:22:54.786 17:19:24 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:22:55.045 * Looking for test storage... 00:22:55.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:55.045 17:19:24 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:55.045 17:19:24 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:22:55.045 17:19:24 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:55.046 17:19:25 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@345 -- # : 1 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:55.046 17:19:25 bdev_raid -- scripts/common.sh@368 -- # return 0 00:22:55.046 17:19:25 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:55.046 17:19:25 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:55.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.046 --rc genhtml_branch_coverage=1 00:22:55.046 --rc genhtml_function_coverage=1 00:22:55.046 --rc genhtml_legend=1 00:22:55.046 --rc geninfo_all_blocks=1 00:22:55.046 --rc geninfo_unexecuted_blocks=1 00:22:55.046 00:22:55.046 ' 00:22:55.046 17:19:25 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:55.046 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:22:55.046 --rc genhtml_branch_coverage=1 00:22:55.046 --rc genhtml_function_coverage=1 00:22:55.046 --rc genhtml_legend=1 00:22:55.046 --rc geninfo_all_blocks=1 00:22:55.046 --rc geninfo_unexecuted_blocks=1 00:22:55.046 00:22:55.046 ' 00:22:55.046 17:19:25 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:55.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.046 --rc genhtml_branch_coverage=1 00:22:55.046 --rc genhtml_function_coverage=1 00:22:55.046 --rc genhtml_legend=1 00:22:55.046 --rc geninfo_all_blocks=1 00:22:55.046 --rc geninfo_unexecuted_blocks=1 00:22:55.046 00:22:55.046 ' 00:22:55.046 17:19:25 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:55.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.046 --rc genhtml_branch_coverage=1 00:22:55.046 --rc genhtml_function_coverage=1 00:22:55.046 --rc genhtml_legend=1 00:22:55.046 --rc geninfo_all_blocks=1 00:22:55.046 --rc geninfo_unexecuted_blocks=1 00:22:55.046 00:22:55.046 ' 00:22:55.046 17:19:25 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:55.046 17:19:25 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:22:55.046 17:19:25 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:22:55.046 17:19:25 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:22:55.046 17:19:25 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:22:55.046 17:19:25 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:22:55.046 17:19:25 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:22:55.046 17:19:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:55.046 17:19:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:55.046 17:19:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:55.046 ************************************ 
00:22:55.046 START TEST raid1_resize_data_offset_test 00:22:55.046 ************************************ 00:22:55.046 17:19:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:22:55.046 17:19:25 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60133 00:22:55.046 Process raid pid: 60133 00:22:55.046 17:19:25 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60133' 00:22:55.046 17:19:25 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60133 00:22:55.046 17:19:25 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:55.046 17:19:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60133 ']' 00:22:55.046 17:19:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.046 17:19:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.046 17:19:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.046 17:19:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.046 17:19:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.305 [2024-11-26 17:19:25.180446] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:22:55.305 [2024-11-26 17:19:25.180612] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.305 [2024-11-26 17:19:25.353225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.563 [2024-11-26 17:19:25.504851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.822 [2024-11-26 17:19:25.745811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:55.822 [2024-11-26 17:19:25.745871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:56.081 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.082 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:22:56.082 17:19:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:22:56.082 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.082 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.082 malloc0 00:22:56.082 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.082 17:19:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:22:56.082 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.082 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.341 malloc1 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.341 17:19:26 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.341 null0 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.341 [2024-11-26 17:19:26.247154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:22:56.341 [2024-11-26 17:19:26.249388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:56.341 [2024-11-26 17:19:26.249463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:22:56.341 [2024-11-26 17:19:26.249652] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:56.341 [2024-11-26 17:19:26.249671] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:22:56.341 [2024-11-26 17:19:26.249983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:56.341 [2024-11-26 17:19:26.250153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:56.341 [2024-11-26 17:19:26.250167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:22:56.341 [2024-11-26 17:19:26.250328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.341 [2024-11-26 17:19:26.307127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.341 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.908 malloc2 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.908 [2024-11-26 17:19:26.904590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:56.908 [2024-11-26 17:19:26.924789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.908 [2024-11-26 17:19:26.927176] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60133 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60133 ']' 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60133 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:22:56.908 17:19:26 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60133 00:22:56.908 17:19:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:56.908 17:19:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:56.908 killing process with pid 60133 00:22:56.908 17:19:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60133' 00:22:56.908 17:19:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60133 00:22:56.908 17:19:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60133 00:22:56.908 [2024-11-26 17:19:27.019279] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:57.167 [2024-11-26 17:19:27.020995] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:22:57.167 [2024-11-26 17:19:27.021069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:57.167 [2024-11-26 17:19:27.021089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:22:57.167 [2024-11-26 17:19:27.060725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:57.167 [2024-11-26 17:19:27.061114] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:57.167 [2024-11-26 17:19:27.061138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:22:59.088 [2024-11-26 17:19:28.927683] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:00.042 17:19:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:23:00.042 00:23:00.042 real 0m5.074s 00:23:00.042 user 0m4.886s 00:23:00.042 sys 0m0.696s 00:23:00.042 17:19:30 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.042 17:19:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.042 ************************************ 00:23:00.042 END TEST raid1_resize_data_offset_test 00:23:00.042 ************************************ 00:23:00.300 17:19:30 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:23:00.300 17:19:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:00.300 17:19:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.300 17:19:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:00.300 ************************************ 00:23:00.300 START TEST raid0_resize_superblock_test 00:23:00.300 ************************************ 00:23:00.300 17:19:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:23:00.300 17:19:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:23:00.300 17:19:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60222 00:23:00.300 17:19:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:00.300 Process raid pid: 60222 00:23:00.300 17:19:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60222' 00:23:00.300 17:19:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60222 00:23:00.300 17:19:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60222 ']' 00:23:00.300 17:19:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.300 17:19:30 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.300 17:19:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.300 17:19:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.300 17:19:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.300 [2024-11-26 17:19:30.329410] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:23:00.300 [2024-11-26 17:19:30.329601] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.558 [2024-11-26 17:19:30.515126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.558 [2024-11-26 17:19:30.667612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.816 [2024-11-26 17:19:30.879440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:00.816 [2024-11-26 17:19:30.879497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:01.381 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.381 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:23:01.381 17:19:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:23:01.381 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.381 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:23:01.946 malloc0 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.946 [2024-11-26 17:19:31.825273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:23:01.946 [2024-11-26 17:19:31.825350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.946 [2024-11-26 17:19:31.825377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:01.946 [2024-11-26 17:19:31.825393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.946 [2024-11-26 17:19:31.828003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.946 [2024-11-26 17:19:31.828048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:23:01.946 pt0 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.946 9f11ab37-35c1-41e6-92fc-e46abeb8ff43 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.946 a4d5b396-63a1-43d8-879e-1394e0dfeab5 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.946 28ac0c97-b797-453a-9e5f-f8dd99030e0b 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.946 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.946 [2024-11-26 17:19:31.979356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a4d5b396-63a1-43d8-879e-1394e0dfeab5 is claimed 00:23:01.946 [2024-11-26 17:19:31.979478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 28ac0c97-b797-453a-9e5f-f8dd99030e0b is claimed 00:23:01.946 [2024-11-26 17:19:31.979628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:01.946 [2024-11-26 17:19:31.979649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:23:01.946 [2024-11-26 17:19:31.979981] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:01.947 [2024-11-26 17:19:31.980188] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:01.947 [2024-11-26 17:19:31.980208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:23:01.947 [2024-11-26 17:19:31.980393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.947 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.947 17:19:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:23:01.947 17:19:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:23:01.947 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.947 17:19:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.947 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.947 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:23:01.947 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:23:01.947 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.947 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.947 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:23:02.206 17:19:32 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:23:02.206 [2024-11-26 17:19:32.083492] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.206 [2024-11-26 17:19:32.119481] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:02.206 [2024-11-26 17:19:32.119536] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'a4d5b396-63a1-43d8-879e-1394e0dfeab5' was resized: old size 131072, new size 204800 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.206 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.206 [2024-11-26 17:19:32.131385] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:02.206 [2024-11-26 17:19:32.131422] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '28ac0c97-b797-453a-9e5f-f8dd99030e0b' was resized: old size 131072, new size 204800 00:23:02.206 [2024-11-26 17:19:32.131461] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.207 17:19:32 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:23:02.207 [2024-11-26 17:19:32.219285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.207 [2024-11-26 17:19:32.263044] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:23:02.207 [2024-11-26 17:19:32.263153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:23:02.207 [2024-11-26 17:19:32.263173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:02.207 [2024-11-26 17:19:32.263191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:23:02.207 [2024-11-26 17:19:32.263375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:02.207 [2024-11-26 17:19:32.263424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:02.207 [2024-11-26 17:19:32.263445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.207 [2024-11-26 17:19:32.274905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:23:02.207 [2024-11-26 17:19:32.274995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:02.207 [2024-11-26 17:19:32.275023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:02.207 [2024-11-26 17:19:32.275038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:02.207 [2024-11-26 17:19:32.277893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:02.207 [2024-11-26 17:19:32.277945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:23:02.207 pt0 00:23:02.207 [2024-11-26 17:19:32.279923] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a4d5b396-63a1-43d8-879e-1394e0dfeab5 00:23:02.207 [2024-11-26 17:19:32.280008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a4d5b396-63a1-43d8-879e-1394e0dfeab5 is claimed 00:23:02.207 [2024-11-26 17:19:32.280134] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 28ac0c97-b797-453a-9e5f-f8dd99030e0b 00:23:02.207 [2024-11-26 17:19:32.280164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 28ac0c97-b797-453a-9e5f-f8dd99030e0b is claimed 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.207 [2024-11-26 17:19:32.280313] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 28ac0c97-b797-453a-9e5f-f8dd99030e0b (2) smaller than existing raid bdev Raid (3) 00:23:02.207 [2024-11-26 17:19:32.280351] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev a4d5b396-63a1-43d8-879e-1394e0dfeab5: File exists 00:23:02.207 [2024-11-26 17:19:32.280396] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:02.207 [2024-11-26 17:19:32.280412] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:23:02.207 [2024-11-26 17:19:32.280735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.207 [2024-11-26 17:19:32.280904] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:02.207 [2024-11-26 17:19:32.280918] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, 
raid_bdev 0x617000007b00 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.207 [2024-11-26 17:19:32.281100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.207 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.207 [2024-11-26 17:19:32.307161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:02.468 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.468 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:02.468 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:02.468 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:23:02.468 17:19:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60222 00:23:02.468 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60222 ']' 00:23:02.468 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60222 00:23:02.468 17:19:32 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:23:02.468 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.468 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60222 00:23:02.468 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:02.468 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:02.468 killing process with pid 60222 00:23:02.468 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60222' 00:23:02.468 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60222 00:23:02.468 [2024-11-26 17:19:32.392850] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:02.468 17:19:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60222 00:23:02.468 [2024-11-26 17:19:32.393019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:02.468 [2024-11-26 17:19:32.393077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:02.468 [2024-11-26 17:19:32.393089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:23:03.846 [2024-11-26 17:19:33.918702] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:05.223 17:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:23:05.223 00:23:05.223 real 0m4.912s 00:23:05.223 user 0m5.074s 00:23:05.223 sys 0m0.704s 00:23:05.223 17:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:05.223 ************************************ 00:23:05.223 17:19:35 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:05.223 END TEST raid0_resize_superblock_test 00:23:05.223 ************************************ 00:23:05.223 17:19:35 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:23:05.223 17:19:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:05.223 17:19:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.223 17:19:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:05.223 ************************************ 00:23:05.223 START TEST raid1_resize_superblock_test 00:23:05.223 ************************************ 00:23:05.223 17:19:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:23:05.223 17:19:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:23:05.223 17:19:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60320 00:23:05.223 Process raid pid: 60320 00:23:05.223 17:19:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60320' 00:23:05.223 17:19:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:05.223 17:19:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60320 00:23:05.223 17:19:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60320 ']' 00:23:05.223 17:19:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.223 17:19:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:05.223 17:19:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.223 17:19:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.223 17:19:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.223 [2024-11-26 17:19:35.312664] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:23:05.223 [2024-11-26 17:19:35.312809] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.481 [2024-11-26 17:19:35.497241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.740 [2024-11-26 17:19:35.642765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.003 [2024-11-26 17:19:35.882869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:06.003 [2024-11-26 17:19:35.882929] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:06.266 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.266 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:23:06.266 17:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:23:06.266 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.266 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.833 malloc0 00:23:06.833 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.833 17:19:36 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:23:06.833 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.834 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.834 [2024-11-26 17:19:36.799446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:23:06.834 [2024-11-26 17:19:36.799539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.834 [2024-11-26 17:19:36.799571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:06.834 [2024-11-26 17:19:36.799587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.834 [2024-11-26 17:19:36.802407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.834 [2024-11-26 17:19:36.802450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:23:06.834 pt0 00:23:06.834 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.834 17:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:23:06.834 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.834 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.834 c79408b9-8eb7-40d4-8c43-12de68a02daf 00:23:06.834 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.834 17:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:23:06.834 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.834 17:19:36 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.834 6bf187d9-a5e8-4514-8b54-a66a8cd5198f 00:23:06.834 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.834 17:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:23:06.834 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.834 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.093 e9f3ba8d-e044-4c79-ad9c-1b23bc556750 00:23:07.093 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.093 17:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:23:07.093 17:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:23:07.093 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.093 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.093 [2024-11-26 17:19:36.955660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6bf187d9-a5e8-4514-8b54-a66a8cd5198f is claimed 00:23:07.093 [2024-11-26 17:19:36.955821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e9f3ba8d-e044-4c79-ad9c-1b23bc556750 is claimed 00:23:07.093 [2024-11-26 17:19:36.955983] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:07.093 [2024-11-26 17:19:36.956003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:23:07.093 [2024-11-26 17:19:36.956362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:07.093 [2024-11-26 17:19:36.956627] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:07.093 [2024-11-26 17:19:36.956649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:23:07.093 [2024-11-26 17:19:36.956859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:07.093 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.093 17:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:23:07.093 17:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:23:07.093 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.093 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.093 17:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.093 17:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:23:07.093 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:23:07.093 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:23:07.093 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.093 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.093 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.093 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:23:07.093 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:07.093 17:19:37 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.094 [2024-11-26 17:19:37.059735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.094 [2024-11-26 17:19:37.099664] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:07.094 [2024-11-26 17:19:37.099703] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '6bf187d9-a5e8-4514-8b54-a66a8cd5198f' was resized: old size 131072, new size 204800 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:23:07.094 17:19:37 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.094 [2024-11-26 17:19:37.107569] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:07.094 [2024-11-26 17:19:37.107603] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e9f3ba8d-e044-4c79-ad9c-1b23bc556750' was resized: old size 131072, new size 204800 00:23:07.094 [2024-11-26 17:19:37.107631] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:07.094 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:23:07.094 [2024-11-26 17:19:37.195500] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.354 [2024-11-26 17:19:37.255229] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:23:07.354 [2024-11-26 17:19:37.255318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:23:07.354 [2024-11-26 17:19:37.255352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:23:07.354 [2024-11-26 17:19:37.255542] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:07.354 [2024-11-26 17:19:37.255764] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:07.354 [2024-11-26 17:19:37.255828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:07.354 [2024-11-26 17:19:37.255845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.354 [2024-11-26 17:19:37.263091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:23:07.354 [2024-11-26 17:19:37.263158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.354 [2024-11-26 17:19:37.263183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:07.354 [2024-11-26 17:19:37.263200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.354 [2024-11-26 17:19:37.265984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.354 [2024-11-26 17:19:37.266025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:23:07.354 [2024-11-26 17:19:37.267843] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
6bf187d9-a5e8-4514-8b54-a66a8cd5198f 00:23:07.354 [2024-11-26 17:19:37.267946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6bf187d9-a5e8-4514-8b54-a66a8cd5198f is claimed 00:23:07.354 [2024-11-26 17:19:37.268055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e9f3ba8d-e044-4c79-ad9c-1b23bc556750 00:23:07.354 [2024-11-26 17:19:37.268083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e9f3ba8d-e044-4c79-ad9c-1b23bc556750 is claimed 00:23:07.354 [2024-11-26 17:19:37.268247] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev e9f3ba8d-e044-4c79-ad9c-1b23bc556750 (2) smaller than existing raid bdev Raid (3) 00:23:07.354 pt0 00:23:07.354 [2024-11-26 17:19:37.268280] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 6bf187d9-a5e8-4514-8b54-a66a8cd5198f: File exists 00:23:07.354 [2024-11-26 17:19:37.268316] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:07.354 [2024-11-26 17:19:37.268349] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:07.354 [2024-11-26 17:19:37.268648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:07.354 [2024-11-26 17:19:37.268824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:07.354 [2024-11-26 17:19:37.268835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.354 [2024-11-26 17:19:37.269002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.354 [2024-11-26 17:19:37.288213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60320 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60320 ']' 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60320 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60320 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:07.354 killing process with pid 60320 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60320' 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60320 00:23:07.354 [2024-11-26 17:19:37.371190] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:07.354 17:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60320 00:23:07.354 [2024-11-26 17:19:37.371319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:07.354 [2024-11-26 17:19:37.371389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:07.354 [2024-11-26 17:19:37.371401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:23:09.282 [2024-11-26 17:19:38.872221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:10.229 17:19:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:23:10.229 00:23:10.229 real 0m4.880s 00:23:10.229 user 0m4.996s 00:23:10.229 sys 0m0.746s 00:23:10.229 17:19:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:10.229 17:19:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.229 ************************************ 00:23:10.229 END TEST raid1_resize_superblock_test 00:23:10.229 
************************************ 00:23:10.229 17:19:40 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:23:10.229 17:19:40 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:23:10.229 17:19:40 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:23:10.229 17:19:40 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:23:10.229 17:19:40 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:23:10.229 17:19:40 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:23:10.229 17:19:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:10.229 17:19:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:10.229 17:19:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:10.229 ************************************ 00:23:10.229 START TEST raid_function_test_raid0 00:23:10.229 ************************************ 00:23:10.229 17:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:23:10.229 17:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:23:10.229 17:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:23:10.229 17:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:23:10.229 17:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60423 00:23:10.229 17:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:10.229 Process raid pid: 60423 00:23:10.229 17:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60423' 00:23:10.229 17:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60423 00:23:10.229 17:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 
60423 ']' 00:23:10.229 17:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.229 17:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.229 17:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.229 17:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.229 17:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:23:10.229 [2024-11-26 17:19:40.324960] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:23:10.229 [2024-11-26 17:19:40.325214] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.498 [2024-11-26 17:19:40.526450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.756 [2024-11-26 17:19:40.679425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.014 [2024-11-26 17:19:40.931018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:11.014 [2024-11-26 17:19:40.931064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:11.271 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.271 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:23:11.271 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:23:11.272 17:19:41 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:23:11.272 Base_1 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:23:11.272 Base_2 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:23:11.272 [2024-11-26 17:19:41.277559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:23:11.272 [2024-11-26 17:19:41.279962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:23:11.272 [2024-11-26 17:19:41.280045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:11.272 [2024-11-26 17:19:41.280061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:11.272 [2024-11-26 17:19:41.280371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:11.272 [2024-11-26 17:19:41.280568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:11.272 [2024-11-26 17:19:41.280581] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000007780 00:23:11.272 [2024-11-26 17:19:41.280765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:23:11.272 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:23:11.529 [2024-11-26 17:19:41.557253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:11.529 /dev/nbd0 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:11.529 1+0 records in 00:23:11.529 1+0 records out 00:23:11.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439251 s, 9.3 MB/s 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 
-- # size=4096 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:11.529 17:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:23:11.530 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:11.530 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:11.530 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:23:11.530 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:23:11.530 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:23:11.787 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:11.787 { 00:23:11.787 "nbd_device": "/dev/nbd0", 00:23:11.787 "bdev_name": "raid" 00:23:11.787 } 00:23:11.787 ]' 00:23:11.787 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:11.787 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:11.787 { 00:23:11.787 "nbd_device": "/dev/nbd0", 00:23:11.787 "bdev_name": "raid" 00:23:11.787 } 00:23:11.787 ]' 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:23:12.044 17:19:41 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:23:12.044 17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:23:12.044 
17:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:23:12.044 4096+0 records in 00:23:12.044 4096+0 records out 00:23:12.044 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.037423 s, 56.0 MB/s 00:23:12.044 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:23:12.301 4096+0 records in 00:23:12.301 4096+0 records out 00:23:12.301 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.240115 s, 8.7 MB/s 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:23:12.301 128+0 records in 00:23:12.301 128+0 records out 00:23:12.301 65536 bytes (66 kB, 64 KiB) copied, 0.00182526 s, 35.9 MB/s 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 
00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:23:12.301 2035+0 records in 00:23:12.301 2035+0 records out 00:23:12.301 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0201986 s, 51.6 MB/s 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:23:12.301 456+0 records in 00:23:12.301 456+0 records out 00:23:12.301 233472 bytes (233 kB, 228 KiB) copied, 0.00517452 s, 45.1 MB/s 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:23:12.301 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:23:12.302 17:19:42 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:23:12.302 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:23:12.302 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:23:12.302 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:23:12.302 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:12.302 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:12.302 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:12.302 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:12.302 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:23:12.302 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:12.302 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:12.868 [2024-11-26 17:19:42.733305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:12.868 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:12.868 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:12.868 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:12.868 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.868 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.868 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:23:12.868 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:23:12.868 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.868 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:23:12.868 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:23:12.868 17:19:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:23:13.126 17:19:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:13.126 17:19:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60423 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60423 ']' 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@958 -- # kill -0 60423 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60423 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:13.127 killing process with pid 60423 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60423' 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60423 00:23:13.127 [2024-11-26 17:19:43.128842] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:13.127 17:19:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60423 00:23:13.127 [2024-11-26 17:19:43.128972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:13.127 [2024-11-26 17:19:43.129034] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:13.127 [2024-11-26 17:19:43.129056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:23:13.385 [2024-11-26 17:19:43.362336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:14.759 17:19:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:23:14.759 00:23:14.759 real 0m4.472s 00:23:14.759 user 0m5.162s 00:23:14.759 sys 0m1.252s 00:23:14.759 17:19:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.759 ************************************ 
00:23:14.759 END TEST raid_function_test_raid0 00:23:14.759 17:19:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:23:14.759 ************************************ 00:23:14.759 17:19:44 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:23:14.759 17:19:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:14.759 17:19:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:14.759 17:19:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:14.759 ************************************ 00:23:14.759 START TEST raid_function_test_concat 00:23:14.759 ************************************ 00:23:14.760 17:19:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:23:14.760 17:19:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:23:14.760 17:19:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:23:14.760 17:19:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:23:14.760 17:19:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60558 00:23:14.760 17:19:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:14.760 17:19:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60558' 00:23:14.760 Process raid pid: 60558 00:23:14.760 17:19:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60558 00:23:14.760 17:19:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60558 ']' 00:23:14.760 17:19:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.760 17:19:44 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.760 17:19:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.760 17:19:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.760 17:19:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:23:14.760 [2024-11-26 17:19:44.844699] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:23:14.760 [2024-11-26 17:19:44.844861] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.017 [2024-11-26 17:19:45.034842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.274 [2024-11-26 17:19:45.187559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.533 [2024-11-26 17:19:45.434604] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:15.533 [2024-11-26 17:19:45.434683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:23:15.791 Base_1 
00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:23:15.791 Base_2 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:23:15.791 [2024-11-26 17:19:45.843070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:23:15.791 [2024-11-26 17:19:45.845480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:23:15.791 [2024-11-26 17:19:45.845577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:15.791 [2024-11-26 17:19:45.845594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:15.791 [2024-11-26 17:19:45.845908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:15.791 [2024-11-26 17:19:45.846095] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:15.791 [2024-11-26 17:19:45.846113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:23:15.791 [2024-11-26 17:19:45.846294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:15.791 17:19:45 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:15.791 17:19:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:23:16.050 [2024-11-26 17:19:46.130830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:16.050 /dev/nbd0 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:16.309 1+0 records in 00:23:16.309 1+0 records out 00:23:16.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483684 s, 8.5 MB/s 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:23:16.309 17:19:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:23:16.568 17:19:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:16.568 { 00:23:16.568 "nbd_device": "/dev/nbd0", 00:23:16.568 "bdev_name": "raid" 00:23:16.568 } 00:23:16.568 ]' 00:23:16.568 17:19:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:16.568 { 00:23:16.569 "nbd_device": "/dev/nbd0", 00:23:16.569 "bdev_name": "raid" 00:23:16.569 } 00:23:16.569 ]' 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:23:16.569 17:19:46 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:23:16.569 4096+0 records in 00:23:16.569 4096+0 records out 00:23:16.569 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0370457 s, 56.6 MB/s 00:23:16.569 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:23:16.828 4096+0 records in 00:23:16.828 4096+0 records out 00:23:16.828 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.325212 s, 6.4 MB/s 00:23:16.828 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:23:16.828 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:23:16.828 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:23:16.828 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:23:16.828 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:23:16.828 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:23:16.828 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:23:16.828 128+0 records in 00:23:16.828 128+0 records out 00:23:16.828 65536 bytes (66 kB, 64 KiB) copied, 0.000836024 s, 78.4 MB/s 00:23:16.828 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:23:16.828 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:23:16.828 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:23:16.828 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:23:16.828 17:19:46 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:23:16.828 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:23:16.828 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:23:16.828 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:23:17.086 2035+0 records in 00:23:17.086 2035+0 records out 00:23:17.086 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0170286 s, 61.2 MB/s 00:23:17.086 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:23:17.086 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:23:17.086 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:23:17.086 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:23:17.086 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:23:17.086 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:23:17.086 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:23:17.086 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:23:17.086 456+0 records in 00:23:17.086 456+0 records out 00:23:17.086 233472 bytes (233 kB, 228 KiB) copied, 0.00238923 s, 97.7 MB/s 00:23:17.086 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:23:17.086 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:23:17.086 17:19:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:23:17.086 17:19:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:23:17.086 17:19:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:23:17.086 17:19:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:23:17.086 17:19:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:17.086 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:17.086 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:17.086 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:17.086 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:23:17.086 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:17.086 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:17.345 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:17.345 [2024-11-26 17:19:47.295647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:17.345 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:17.345 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:17.345 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:17.345 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:17.345 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:17.345 17:19:47 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:23:17.345 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:23:17.345 17:19:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:23:17.345 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:23:17.345 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:23:17.603 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:17.603 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60558 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60558 ']' 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- 
# kill -0 60558 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60558 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:17.604 killing process with pid 60558 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60558' 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60558 00:23:17.604 [2024-11-26 17:19:47.654261] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:17.604 17:19:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60558 00:23:17.604 [2024-11-26 17:19:47.654390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:17.604 [2024-11-26 17:19:47.654462] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:17.604 [2024-11-26 17:19:47.654478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:23:17.863 [2024-11-26 17:19:47.885295] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:19.238 ************************************ 00:23:19.238 END TEST raid_function_test_concat 00:23:19.238 ************************************ 00:23:19.238 17:19:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:23:19.238 00:23:19.238 real 0m4.434s 00:23:19.238 user 0m5.009s 00:23:19.238 sys 0m1.246s 00:23:19.238 17:19:49 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:19.238 17:19:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:23:19.238 17:19:49 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:23:19.238 17:19:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:19.238 17:19:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:19.238 17:19:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:19.238 ************************************ 00:23:19.238 START TEST raid0_resize_test 00:23:19.238 ************************************ 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60692 00:23:19.239 Process raid pid: 60692 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60692' 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60692 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60692 ']' 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.239 17:19:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.497 [2024-11-26 17:19:49.352654] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:23:19.497 [2024-11-26 17:19:49.352810] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.497 [2024-11-26 17:19:49.544652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.756 [2024-11-26 17:19:49.697210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.014 [2024-11-26 17:19:49.948258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:20.014 [2024-11-26 17:19:49.948323] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.274 Base_1 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.274 Base_2 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.274 [2024-11-26 17:19:50.256230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:23:20.274 [2024-11-26 17:19:50.258703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:23:20.274 [2024-11-26 17:19:50.258780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:20.274 [2024-11-26 17:19:50.258796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:20.274 [2024-11-26 17:19:50.259138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:20.274 [2024-11-26 17:19:50.259293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:20.274 [2024-11-26 17:19:50.259305] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:23:20.274 [2024-11-26 17:19:50.259511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.274 [2024-11-26 17:19:50.268199] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:20.274 [2024-11-26 17:19:50.268239] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:23:20.274 true 
00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:23:20.274 [2024-11-26 17:19:50.280352] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.274 [2024-11-26 17:19:50.328153] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:20.274 [2024-11-26 17:19:50.328198] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:23:20.274 [2024-11-26 17:19:50.328237] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:23:20.274 true 
00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.274 [2024-11-26 17:19:50.344345] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60692 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60692 ']' 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60692 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:23:20.274 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.275 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60692 00:23:20.534 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.534 17:19:50 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.534 killing process with pid 60692 00:23:20.534 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60692' 00:23:20.534 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60692 00:23:20.534 [2024-11-26 17:19:50.413198] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:20.534 17:19:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60692 00:23:20.534 [2024-11-26 17:19:50.413323] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:20.534 [2024-11-26 17:19:50.413385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:20.534 [2024-11-26 17:19:50.413398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:23:20.534 [2024-11-26 17:19:50.431636] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:21.913 17:19:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:23:21.913 ************************************ 00:23:21.913 END TEST raid0_resize_test 00:23:21.913 ************************************ 00:23:21.913 00:23:21.913 real 0m2.394s 00:23:21.913 user 0m2.471s 00:23:21.913 sys 0m0.449s 00:23:21.913 17:19:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.913 17:19:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.913 17:19:51 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:23:21.913 17:19:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:21.913 17:19:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.913 17:19:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:21.913 
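The raid0 resize check that just passed reduces to simple arithmetic: each base bdev grows from 32 MB to 64 MB (65536 to 131072 512-byte blocks), so the raid0 array's block count doubles from 131072 to 262144, and the test converts that count back to MB before comparing against `expected_size`. A minimal sketch of that conversion outside the harness (variable names are illustrative, not taken from bdev_raid.sh):

```shell
#!/usr/bin/env bash
# Convert a raid bdev's reported block count to MB, as raid0_resize_test does.
blksize=512     # bytes per block, matching the test's blksize=512
blkcnt=262144   # num_blocks reported after both base bdevs were resized to 64 MB
raid_size_mb=$(( blkcnt * blksize / 1048576 ))
echo "$raid_size_mb"   # raid0 over two 64 MB base bdevs -> 128
```

With raid1 (the next test below) the same block count maps to half the usable size, since the mirrored array only grows when both base bdevs have been resized.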
************************************ 00:23:21.913 START TEST raid1_resize_test 00:23:21.913 ************************************ 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60748 00:23:21.913 Process raid pid: 60748 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60748' 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60748 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60748 ']' 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.913 17:19:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:21.913 [2024-11-26 17:19:51.830648] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:23:21.913 [2024-11-26 17:19:51.830793] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.913 [2024-11-26 17:19:52.019577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.172 [2024-11-26 17:19:52.170633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.430 [2024-11-26 17:19:52.422473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:22.430 [2024-11-26 17:19:52.422576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:22.687 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.687 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.688 Base_1 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:23:22.688 17:19:52 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.688 Base_2 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.688 [2024-11-26 17:19:52.708708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:23:22.688 [2024-11-26 17:19:52.711176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:23:22.688 [2024-11-26 17:19:52.711247] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:22.688 [2024-11-26 17:19:52.711262] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:22.688 [2024-11-26 17:19:52.711546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:22.688 [2024-11-26 17:19:52.711717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:22.688 [2024-11-26 17:19:52.711740] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:23:22.688 [2024-11-26 17:19:52.711941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:23:22.688 17:19:52 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.688 [2024-11-26 17:19:52.720659] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:22.688 [2024-11-26 17:19:52.720697] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:23:22.688 true 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:23:22.688 [2024-11-26 17:19:52.736834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:23:22.688 [2024-11-26 17:19:52.780672] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:22.688 [2024-11-26 17:19:52.780710] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:23:22.688 [2024-11-26 17:19:52.780742] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:23:22.688 true 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:23:22.688 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.688 [2024-11-26 17:19:52.796811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60748 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60748 ']' 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60748 00:23:22.946 
17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60748 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:22.946 killing process with pid 60748 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60748' 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60748 00:23:22.946 [2024-11-26 17:19:52.878502] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:22.946 [2024-11-26 17:19:52.878641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:22.946 17:19:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60748 00:23:22.946 [2024-11-26 17:19:52.879243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:22.946 [2024-11-26 17:19:52.879275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:23:22.946 [2024-11-26 17:19:52.898329] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:24.324 17:19:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:23:24.324 00:23:24.324 real 0m2.444s 00:23:24.324 user 0m2.528s 00:23:24.324 sys 0m0.456s 00:23:24.324 17:19:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.324 17:19:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.324 ************************************ 00:23:24.324 END TEST raid1_resize_test 
00:23:24.324 ************************************ 00:23:24.324 17:19:54 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:23:24.324 17:19:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:23:24.324 17:19:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:23:24.324 17:19:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:24.324 17:19:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.324 17:19:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:24.324 ************************************ 00:23:24.324 START TEST raid_state_function_test 00:23:24.324 ************************************ 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:24.324 17:19:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60811 00:23:24.324 Process raid pid: 60811 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60811' 00:23:24.324 17:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60811 00:23:24.325 17:19:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60811 ']' 00:23:24.325 17:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.325 17:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.325 17:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.325 17:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.325 17:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.325 [2024-11-26 17:19:54.337993] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:23:24.325 [2024-11-26 17:19:54.338138] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.583 [2024-11-26 17:19:54.525456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.583 [2024-11-26 17:19:54.678228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.841 [2024-11-26 17:19:54.932422] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:24.841 [2024-11-26 17:19:54.932482] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:25.406 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.406 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:23:25.406 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:25.406 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.406 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.406 [2024-11-26 17:19:55.234016] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:25.406 [2024-11-26 17:19:55.234084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:25.406 [2024-11-26 17:19:55.234096] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:25.406 [2024-11-26 17:19:55.234110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:25.406 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.406 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:23:25.406 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:25.406 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:25.406 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:25.406 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:25.406 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:25.406 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:25.406 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:25.406 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:25.407 
17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:25.407 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.407 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.407 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.407 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.407 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.407 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:25.407 "name": "Existed_Raid", 00:23:25.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.407 "strip_size_kb": 64, 00:23:25.407 "state": "configuring", 00:23:25.407 "raid_level": "raid0", 00:23:25.407 "superblock": false, 00:23:25.407 "num_base_bdevs": 2, 00:23:25.407 "num_base_bdevs_discovered": 0, 00:23:25.407 "num_base_bdevs_operational": 2, 00:23:25.407 "base_bdevs_list": [ 00:23:25.407 { 00:23:25.407 "name": "BaseBdev1", 00:23:25.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.407 "is_configured": false, 00:23:25.407 "data_offset": 0, 00:23:25.407 "data_size": 0 00:23:25.407 }, 00:23:25.407 { 00:23:25.407 "name": "BaseBdev2", 00:23:25.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.407 "is_configured": false, 00:23:25.407 "data_offset": 0, 00:23:25.407 "data_size": 0 00:23:25.407 } 00:23:25.407 ] 00:23:25.407 }' 00:23:25.407 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:25.407 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:25.666 17:19:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.666 [2024-11-26 17:19:55.657644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:25.666 [2024-11-26 17:19:55.657692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.666 [2024-11-26 17:19:55.669630] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:25.666 [2024-11-26 17:19:55.669680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:25.666 [2024-11-26 17:19:55.669692] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:25.666 [2024-11-26 17:19:55.669709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.666 [2024-11-26 17:19:55.721143] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:25.666 BaseBdev1 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.666 [ 00:23:25.666 { 00:23:25.666 "name": "BaseBdev1", 00:23:25.666 "aliases": [ 00:23:25.666 "044676c5-b6ad-49e6-a8d4-de851f2b4e58" 00:23:25.666 ], 00:23:25.666 "product_name": "Malloc disk", 00:23:25.666 "block_size": 512, 00:23:25.666 "num_blocks": 65536, 00:23:25.666 "uuid": 
"044676c5-b6ad-49e6-a8d4-de851f2b4e58", 00:23:25.666 "assigned_rate_limits": { 00:23:25.666 "rw_ios_per_sec": 0, 00:23:25.666 "rw_mbytes_per_sec": 0, 00:23:25.666 "r_mbytes_per_sec": 0, 00:23:25.666 "w_mbytes_per_sec": 0 00:23:25.666 }, 00:23:25.666 "claimed": true, 00:23:25.666 "claim_type": "exclusive_write", 00:23:25.666 "zoned": false, 00:23:25.666 "supported_io_types": { 00:23:25.666 "read": true, 00:23:25.666 "write": true, 00:23:25.666 "unmap": true, 00:23:25.666 "flush": true, 00:23:25.666 "reset": true, 00:23:25.666 "nvme_admin": false, 00:23:25.666 "nvme_io": false, 00:23:25.666 "nvme_io_md": false, 00:23:25.666 "write_zeroes": true, 00:23:25.666 "zcopy": true, 00:23:25.666 "get_zone_info": false, 00:23:25.666 "zone_management": false, 00:23:25.666 "zone_append": false, 00:23:25.666 "compare": false, 00:23:25.666 "compare_and_write": false, 00:23:25.666 "abort": true, 00:23:25.666 "seek_hole": false, 00:23:25.666 "seek_data": false, 00:23:25.666 "copy": true, 00:23:25.666 "nvme_iov_md": false 00:23:25.666 }, 00:23:25.666 "memory_domains": [ 00:23:25.666 { 00:23:25.666 "dma_device_id": "system", 00:23:25.666 "dma_device_type": 1 00:23:25.666 }, 00:23:25.666 { 00:23:25.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.666 "dma_device_type": 2 00:23:25.666 } 00:23:25.666 ], 00:23:25.666 "driver_specific": {} 00:23:25.666 } 00:23:25.666 ] 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:25.666 17:19:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.666 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.926 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.926 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:25.926 "name": "Existed_Raid", 00:23:25.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.926 "strip_size_kb": 64, 00:23:25.926 "state": "configuring", 00:23:25.926 "raid_level": "raid0", 00:23:25.926 "superblock": false, 00:23:25.926 "num_base_bdevs": 2, 00:23:25.926 "num_base_bdevs_discovered": 1, 00:23:25.926 "num_base_bdevs_operational": 2, 00:23:25.926 "base_bdevs_list": [ 00:23:25.926 { 00:23:25.926 "name": "BaseBdev1", 00:23:25.926 "uuid": "044676c5-b6ad-49e6-a8d4-de851f2b4e58", 00:23:25.926 "is_configured": true, 00:23:25.926 "data_offset": 0, 
00:23:25.926 "data_size": 65536 00:23:25.926 }, 00:23:25.926 { 00:23:25.926 "name": "BaseBdev2", 00:23:25.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.926 "is_configured": false, 00:23:25.926 "data_offset": 0, 00:23:25.926 "data_size": 0 00:23:25.926 } 00:23:25.926 ] 00:23:25.926 }' 00:23:25.926 17:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:25.926 17:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.185 [2024-11-26 17:19:56.192670] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:26.185 [2024-11-26 17:19:56.192756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.185 [2024-11-26 17:19:56.204700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:26.185 [2024-11-26 17:19:56.207237] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:26.185 [2024-11-26 17:19:56.207287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.185 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.186 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.186 17:19:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.186 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.186 "name": "Existed_Raid", 00:23:26.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.186 "strip_size_kb": 64, 00:23:26.186 "state": "configuring", 00:23:26.186 "raid_level": "raid0", 00:23:26.186 "superblock": false, 00:23:26.186 "num_base_bdevs": 2, 00:23:26.186 "num_base_bdevs_discovered": 1, 00:23:26.186 "num_base_bdevs_operational": 2, 00:23:26.186 "base_bdevs_list": [ 00:23:26.186 { 00:23:26.186 "name": "BaseBdev1", 00:23:26.186 "uuid": "044676c5-b6ad-49e6-a8d4-de851f2b4e58", 00:23:26.186 "is_configured": true, 00:23:26.186 "data_offset": 0, 00:23:26.186 "data_size": 65536 00:23:26.186 }, 00:23:26.186 { 00:23:26.186 "name": "BaseBdev2", 00:23:26.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.186 "is_configured": false, 00:23:26.186 "data_offset": 0, 00:23:26.186 "data_size": 0 00:23:26.186 } 00:23:26.186 ] 00:23:26.186 }' 00:23:26.186 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.186 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.753 [2024-11-26 17:19:56.691414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:26.753 [2024-11-26 17:19:56.691486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:26.753 [2024-11-26 17:19:56.691499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:26.753 [2024-11-26 17:19:56.691941] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:26.753 [2024-11-26 17:19:56.692140] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:26.753 [2024-11-26 17:19:56.692163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:26.753 [2024-11-26 17:19:56.692484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.753 BaseBdev2 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.753 17:19:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.753 [ 00:23:26.753 { 00:23:26.753 "name": "BaseBdev2", 00:23:26.753 "aliases": [ 00:23:26.753 "3a65d9f4-ce36-4115-852e-ec3205197c68" 00:23:26.753 ], 00:23:26.753 "product_name": "Malloc disk", 00:23:26.753 "block_size": 512, 00:23:26.753 "num_blocks": 65536, 00:23:26.753 "uuid": "3a65d9f4-ce36-4115-852e-ec3205197c68", 00:23:26.753 "assigned_rate_limits": { 00:23:26.753 "rw_ios_per_sec": 0, 00:23:26.753 "rw_mbytes_per_sec": 0, 00:23:26.753 "r_mbytes_per_sec": 0, 00:23:26.753 "w_mbytes_per_sec": 0 00:23:26.753 }, 00:23:26.753 "claimed": true, 00:23:26.753 "claim_type": "exclusive_write", 00:23:26.753 "zoned": false, 00:23:26.753 "supported_io_types": { 00:23:26.753 "read": true, 00:23:26.753 "write": true, 00:23:26.753 "unmap": true, 00:23:26.753 "flush": true, 00:23:26.753 "reset": true, 00:23:26.753 "nvme_admin": false, 00:23:26.753 "nvme_io": false, 00:23:26.753 "nvme_io_md": false, 00:23:26.753 "write_zeroes": true, 00:23:26.753 "zcopy": true, 00:23:26.753 "get_zone_info": false, 00:23:26.753 "zone_management": false, 00:23:26.753 "zone_append": false, 00:23:26.753 "compare": false, 00:23:26.753 "compare_and_write": false, 00:23:26.753 "abort": true, 00:23:26.753 "seek_hole": false, 00:23:26.753 "seek_data": false, 00:23:26.753 "copy": true, 00:23:26.753 "nvme_iov_md": false 00:23:26.753 }, 00:23:26.753 "memory_domains": [ 00:23:26.753 { 00:23:26.753 "dma_device_id": "system", 00:23:26.753 "dma_device_type": 1 00:23:26.753 }, 00:23:26.753 { 00:23:26.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.753 "dma_device_type": 2 00:23:26.753 } 00:23:26.753 ], 00:23:26.753 "driver_specific": {} 00:23:26.753 } 00:23:26.753 ] 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:26.753 17:19:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:26.753 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.754 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.754 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:23:26.754 "name": "Existed_Raid", 00:23:26.754 "uuid": "0cf9a24a-cb18-4a2a-b1c1-a04be11cc044", 00:23:26.754 "strip_size_kb": 64, 00:23:26.754 "state": "online", 00:23:26.754 "raid_level": "raid0", 00:23:26.754 "superblock": false, 00:23:26.754 "num_base_bdevs": 2, 00:23:26.754 "num_base_bdevs_discovered": 2, 00:23:26.754 "num_base_bdevs_operational": 2, 00:23:26.754 "base_bdevs_list": [ 00:23:26.754 { 00:23:26.754 "name": "BaseBdev1", 00:23:26.754 "uuid": "044676c5-b6ad-49e6-a8d4-de851f2b4e58", 00:23:26.754 "is_configured": true, 00:23:26.754 "data_offset": 0, 00:23:26.754 "data_size": 65536 00:23:26.754 }, 00:23:26.754 { 00:23:26.754 "name": "BaseBdev2", 00:23:26.754 "uuid": "3a65d9f4-ce36-4115-852e-ec3205197c68", 00:23:26.754 "is_configured": true, 00:23:26.754 "data_offset": 0, 00:23:26.754 "data_size": 65536 00:23:26.754 } 00:23:26.754 ] 00:23:26.754 }' 00:23:26.754 17:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.754 17:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.321 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:27.321 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:27.321 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:27.321 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:27.321 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:27.321 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:27.321 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:27.321 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:23:27.321 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.321 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.321 [2024-11-26 17:19:57.231035] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:27.321 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.321 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:27.321 "name": "Existed_Raid", 00:23:27.321 "aliases": [ 00:23:27.321 "0cf9a24a-cb18-4a2a-b1c1-a04be11cc044" 00:23:27.321 ], 00:23:27.321 "product_name": "Raid Volume", 00:23:27.321 "block_size": 512, 00:23:27.321 "num_blocks": 131072, 00:23:27.321 "uuid": "0cf9a24a-cb18-4a2a-b1c1-a04be11cc044", 00:23:27.321 "assigned_rate_limits": { 00:23:27.321 "rw_ios_per_sec": 0, 00:23:27.321 "rw_mbytes_per_sec": 0, 00:23:27.321 "r_mbytes_per_sec": 0, 00:23:27.321 "w_mbytes_per_sec": 0 00:23:27.321 }, 00:23:27.321 "claimed": false, 00:23:27.321 "zoned": false, 00:23:27.321 "supported_io_types": { 00:23:27.321 "read": true, 00:23:27.321 "write": true, 00:23:27.321 "unmap": true, 00:23:27.321 "flush": true, 00:23:27.321 "reset": true, 00:23:27.321 "nvme_admin": false, 00:23:27.321 "nvme_io": false, 00:23:27.321 "nvme_io_md": false, 00:23:27.321 "write_zeroes": true, 00:23:27.321 "zcopy": false, 00:23:27.321 "get_zone_info": false, 00:23:27.321 "zone_management": false, 00:23:27.321 "zone_append": false, 00:23:27.321 "compare": false, 00:23:27.321 "compare_and_write": false, 00:23:27.321 "abort": false, 00:23:27.321 "seek_hole": false, 00:23:27.321 "seek_data": false, 00:23:27.321 "copy": false, 00:23:27.321 "nvme_iov_md": false 00:23:27.321 }, 00:23:27.321 "memory_domains": [ 00:23:27.321 { 00:23:27.321 "dma_device_id": "system", 00:23:27.321 "dma_device_type": 1 00:23:27.321 }, 00:23:27.321 { 00:23:27.321 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:27.321 "dma_device_type": 2 00:23:27.321 }, 00:23:27.321 { 00:23:27.321 "dma_device_id": "system", 00:23:27.321 "dma_device_type": 1 00:23:27.321 }, 00:23:27.321 { 00:23:27.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.321 "dma_device_type": 2 00:23:27.321 } 00:23:27.321 ], 00:23:27.321 "driver_specific": { 00:23:27.321 "raid": { 00:23:27.321 "uuid": "0cf9a24a-cb18-4a2a-b1c1-a04be11cc044", 00:23:27.321 "strip_size_kb": 64, 00:23:27.321 "state": "online", 00:23:27.321 "raid_level": "raid0", 00:23:27.321 "superblock": false, 00:23:27.321 "num_base_bdevs": 2, 00:23:27.321 "num_base_bdevs_discovered": 2, 00:23:27.321 "num_base_bdevs_operational": 2, 00:23:27.321 "base_bdevs_list": [ 00:23:27.321 { 00:23:27.321 "name": "BaseBdev1", 00:23:27.321 "uuid": "044676c5-b6ad-49e6-a8d4-de851f2b4e58", 00:23:27.321 "is_configured": true, 00:23:27.321 "data_offset": 0, 00:23:27.321 "data_size": 65536 00:23:27.321 }, 00:23:27.321 { 00:23:27.321 "name": "BaseBdev2", 00:23:27.321 "uuid": "3a65d9f4-ce36-4115-852e-ec3205197c68", 00:23:27.321 "is_configured": true, 00:23:27.321 "data_offset": 0, 00:23:27.321 "data_size": 65536 00:23:27.321 } 00:23:27.321 ] 00:23:27.321 } 00:23:27.321 } 00:23:27.321 }' 00:23:27.321 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:27.321 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:27.321 BaseBdev2' 00:23:27.321 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:27.322 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:23:27.580 [2024-11-26 17:19:57.438504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:27.580 [2024-11-26 17:19:57.438573] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:27.580 [2024-11-26 17:19:57.438651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:27.580 17:19:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.580 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.581 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:27.581 "name": "Existed_Raid", 00:23:27.581 "uuid": "0cf9a24a-cb18-4a2a-b1c1-a04be11cc044", 00:23:27.581 "strip_size_kb": 64, 00:23:27.581 "state": "offline", 00:23:27.581 "raid_level": "raid0", 00:23:27.581 "superblock": false, 00:23:27.581 "num_base_bdevs": 2, 00:23:27.581 "num_base_bdevs_discovered": 1, 00:23:27.581 "num_base_bdevs_operational": 1, 00:23:27.581 "base_bdevs_list": [ 00:23:27.581 { 00:23:27.581 "name": null, 00:23:27.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.581 "is_configured": false, 00:23:27.581 "data_offset": 0, 00:23:27.581 "data_size": 65536 00:23:27.581 }, 00:23:27.581 { 00:23:27.581 "name": "BaseBdev2", 00:23:27.581 "uuid": "3a65d9f4-ce36-4115-852e-ec3205197c68", 00:23:27.581 "is_configured": true, 00:23:27.581 "data_offset": 0, 00:23:27.581 "data_size": 65536 00:23:27.581 } 00:23:27.581 ] 00:23:27.581 }' 00:23:27.581 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:27.581 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.840 17:19:57 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:27.840 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:27.840 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.840 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:27.840 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.840 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.840 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.099 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:28.099 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:28.099 17:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:28.099 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.099 17:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.099 [2024-11-26 17:19:57.977716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:28.099 [2024-11-26 17:19:57.977787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.099 17:19:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60811 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60811 ']' 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60811 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60811 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:28.099 killing process with pid 60811 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60811' 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60811 00:23:28.099 17:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 
-- # wait 60811 00:23:28.099 [2024-11-26 17:19:58.171240] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:28.099 [2024-11-26 17:19:58.188824] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:23:29.476 00:23:29.476 real 0m5.169s 00:23:29.476 user 0m7.349s 00:23:29.476 sys 0m0.975s 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.476 ************************************ 00:23:29.476 END TEST raid_state_function_test 00:23:29.476 ************************************ 00:23:29.476 17:19:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:23:29.476 17:19:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:29.476 17:19:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.476 17:19:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:29.476 ************************************ 00:23:29.476 START TEST raid_state_function_test_sb 00:23:29.476 ************************************ 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:23:29.476 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:29.477 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:29.477 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:23:29.477 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61064 00:23:29.477 Process raid pid: 61064 00:23:29.477 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61064' 00:23:29.477 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:29.477 17:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61064 00:23:29.477 17:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61064 ']' 00:23:29.477 17:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.477 17:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.477 17:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.477 17:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.477 17:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.477 [2024-11-26 17:19:59.577219] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:23:29.477 [2024-11-26 17:19:59.577355] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.737 [2024-11-26 17:19:59.761397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.996 [2024-11-26 17:19:59.910013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.282 [2024-11-26 17:20:00.156485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:30.282 [2024-11-26 17:20:00.156555] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.540 [2024-11-26 17:20:00.456525] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:30.540 [2024-11-26 17:20:00.456589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:30.540 [2024-11-26 17:20:00.456602] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:30.540 [2024-11-26 17:20:00.456616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.540 
17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.540 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.541 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.541 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.541 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.541 "name": "Existed_Raid", 00:23:30.541 "uuid": "467ca797-67cc-4044-8cca-2d32d026dbe1", 00:23:30.541 "strip_size_kb": 
64, 00:23:30.541 "state": "configuring", 00:23:30.541 "raid_level": "raid0", 00:23:30.541 "superblock": true, 00:23:30.541 "num_base_bdevs": 2, 00:23:30.541 "num_base_bdevs_discovered": 0, 00:23:30.541 "num_base_bdevs_operational": 2, 00:23:30.541 "base_bdevs_list": [ 00:23:30.541 { 00:23:30.541 "name": "BaseBdev1", 00:23:30.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.541 "is_configured": false, 00:23:30.541 "data_offset": 0, 00:23:30.541 "data_size": 0 00:23:30.541 }, 00:23:30.541 { 00:23:30.541 "name": "BaseBdev2", 00:23:30.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.541 "is_configured": false, 00:23:30.541 "data_offset": 0, 00:23:30.541 "data_size": 0 00:23:30.541 } 00:23:30.541 ] 00:23:30.541 }' 00:23:30.541 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.541 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.909 [2024-11-26 17:20:00.887842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:30.909 [2024-11-26 17:20:00.887890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.909 17:20:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.909 [2024-11-26 17:20:00.899822] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:30.909 [2024-11-26 17:20:00.899875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:30.909 [2024-11-26 17:20:00.899887] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:30.909 [2024-11-26 17:20:00.899904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.909 [2024-11-26 17:20:00.952872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:30.909 BaseBdev1 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.909 [ 00:23:30.909 { 00:23:30.909 "name": "BaseBdev1", 00:23:30.909 "aliases": [ 00:23:30.909 "c54e3096-8321-470d-8c0d-0ab0c864eba3" 00:23:30.909 ], 00:23:30.909 "product_name": "Malloc disk", 00:23:30.909 "block_size": 512, 00:23:30.909 "num_blocks": 65536, 00:23:30.909 "uuid": "c54e3096-8321-470d-8c0d-0ab0c864eba3", 00:23:30.909 "assigned_rate_limits": { 00:23:30.909 "rw_ios_per_sec": 0, 00:23:30.909 "rw_mbytes_per_sec": 0, 00:23:30.909 "r_mbytes_per_sec": 0, 00:23:30.909 "w_mbytes_per_sec": 0 00:23:30.909 }, 00:23:30.909 "claimed": true, 00:23:30.909 "claim_type": "exclusive_write", 00:23:30.909 "zoned": false, 00:23:30.909 "supported_io_types": { 00:23:30.909 "read": true, 00:23:30.909 "write": true, 00:23:30.909 "unmap": true, 00:23:30.909 "flush": true, 00:23:30.909 "reset": true, 00:23:30.909 "nvme_admin": false, 00:23:30.909 "nvme_io": false, 00:23:30.909 "nvme_io_md": false, 00:23:30.909 "write_zeroes": true, 00:23:30.909 "zcopy": true, 00:23:30.909 "get_zone_info": false, 00:23:30.909 "zone_management": false, 00:23:30.909 "zone_append": false, 00:23:30.909 "compare": false, 00:23:30.909 "compare_and_write": false, 00:23:30.909 
"abort": true, 00:23:30.909 "seek_hole": false, 00:23:30.909 "seek_data": false, 00:23:30.909 "copy": true, 00:23:30.909 "nvme_iov_md": false 00:23:30.909 }, 00:23:30.909 "memory_domains": [ 00:23:30.909 { 00:23:30.909 "dma_device_id": "system", 00:23:30.909 "dma_device_type": 1 00:23:30.909 }, 00:23:30.909 { 00:23:30.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.909 "dma_device_type": 2 00:23:30.909 } 00:23:30.909 ], 00:23:30.909 "driver_specific": {} 00:23:30.909 } 00:23:30.909 ] 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.909 17:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.909 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.909 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.169 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.169 "name": "Existed_Raid", 00:23:31.169 "uuid": "8e00329c-5c9b-492a-9b60-95ab0b08f22a", 00:23:31.169 "strip_size_kb": 64, 00:23:31.169 "state": "configuring", 00:23:31.169 "raid_level": "raid0", 00:23:31.169 "superblock": true, 00:23:31.169 "num_base_bdevs": 2, 00:23:31.169 "num_base_bdevs_discovered": 1, 00:23:31.169 "num_base_bdevs_operational": 2, 00:23:31.169 "base_bdevs_list": [ 00:23:31.169 { 00:23:31.169 "name": "BaseBdev1", 00:23:31.169 "uuid": "c54e3096-8321-470d-8c0d-0ab0c864eba3", 00:23:31.169 "is_configured": true, 00:23:31.169 "data_offset": 2048, 00:23:31.169 "data_size": 63488 00:23:31.169 }, 00:23:31.169 { 00:23:31.169 "name": "BaseBdev2", 00:23:31.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.169 "is_configured": false, 00:23:31.169 "data_offset": 0, 00:23:31.169 "data_size": 0 00:23:31.169 } 00:23:31.169 ] 00:23:31.169 }' 00:23:31.169 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.169 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.428 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:31.428 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.428 17:20:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:31.428 [2024-11-26 17:20:01.388358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:31.428 [2024-11-26 17:20:01.388432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:31.428 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.428 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:31.428 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.428 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.428 [2024-11-26 17:20:01.396458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:31.428 [2024-11-26 17:20:01.398961] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:31.428 [2024-11-26 17:20:01.399009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:31.428 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.428 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:31.428 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:31.428 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.429 "name": "Existed_Raid", 00:23:31.429 "uuid": "e41c21ab-7aa6-479a-81fb-d1a60f9367b6", 00:23:31.429 "strip_size_kb": 64, 00:23:31.429 "state": "configuring", 00:23:31.429 "raid_level": "raid0", 00:23:31.429 "superblock": true, 00:23:31.429 "num_base_bdevs": 2, 00:23:31.429 "num_base_bdevs_discovered": 1, 00:23:31.429 "num_base_bdevs_operational": 2, 00:23:31.429 "base_bdevs_list": [ 00:23:31.429 { 00:23:31.429 "name": "BaseBdev1", 00:23:31.429 "uuid": "c54e3096-8321-470d-8c0d-0ab0c864eba3", 00:23:31.429 "is_configured": true, 00:23:31.429 "data_offset": 2048, 
00:23:31.429 "data_size": 63488 00:23:31.429 }, 00:23:31.429 { 00:23:31.429 "name": "BaseBdev2", 00:23:31.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.429 "is_configured": false, 00:23:31.429 "data_offset": 0, 00:23:31.429 "data_size": 0 00:23:31.429 } 00:23:31.429 ] 00:23:31.429 }' 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.429 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.687 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:31.687 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.687 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.687 [2024-11-26 17:20:01.796198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:31.687 [2024-11-26 17:20:01.796472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:31.687 [2024-11-26 17:20:01.796491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:31.687 [2024-11-26 17:20:01.796833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:31.687 [2024-11-26 17:20:01.797009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:31.687 [2024-11-26 17:20:01.797027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:31.687 BaseBdev2 00:23:31.687 [2024-11-26 17:20:01.797188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.687 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.688 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:23:31.688 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:31.688 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:31.688 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:31.688 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:31.688 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:31.688 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:31.688 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.688 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.946 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.946 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:31.946 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.946 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.946 [ 00:23:31.946 { 00:23:31.946 "name": "BaseBdev2", 00:23:31.946 "aliases": [ 00:23:31.946 "7275cdf3-36fd-4fd6-990c-64d2a0ee8cb7" 00:23:31.946 ], 00:23:31.946 "product_name": "Malloc disk", 00:23:31.946 "block_size": 512, 00:23:31.946 "num_blocks": 65536, 00:23:31.946 "uuid": "7275cdf3-36fd-4fd6-990c-64d2a0ee8cb7", 00:23:31.946 "assigned_rate_limits": { 00:23:31.946 "rw_ios_per_sec": 0, 00:23:31.946 "rw_mbytes_per_sec": 0, 00:23:31.946 "r_mbytes_per_sec": 0, 00:23:31.946 "w_mbytes_per_sec": 0 00:23:31.946 }, 00:23:31.946 "claimed": true, 00:23:31.946 "claim_type": 
"exclusive_write", 00:23:31.946 "zoned": false, 00:23:31.946 "supported_io_types": { 00:23:31.946 "read": true, 00:23:31.946 "write": true, 00:23:31.946 "unmap": true, 00:23:31.946 "flush": true, 00:23:31.946 "reset": true, 00:23:31.946 "nvme_admin": false, 00:23:31.946 "nvme_io": false, 00:23:31.946 "nvme_io_md": false, 00:23:31.946 "write_zeroes": true, 00:23:31.946 "zcopy": true, 00:23:31.946 "get_zone_info": false, 00:23:31.946 "zone_management": false, 00:23:31.946 "zone_append": false, 00:23:31.946 "compare": false, 00:23:31.946 "compare_and_write": false, 00:23:31.946 "abort": true, 00:23:31.946 "seek_hole": false, 00:23:31.946 "seek_data": false, 00:23:31.946 "copy": true, 00:23:31.946 "nvme_iov_md": false 00:23:31.946 }, 00:23:31.946 "memory_domains": [ 00:23:31.946 { 00:23:31.946 "dma_device_id": "system", 00:23:31.946 "dma_device_type": 1 00:23:31.946 }, 00:23:31.946 { 00:23:31.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.946 "dma_device_type": 2 00:23:31.946 } 00:23:31.946 ], 00:23:31.946 "driver_specific": {} 00:23:31.946 } 00:23:31.946 ] 00:23:31.946 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.946 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:31.946 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:31.946 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:31.946 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:23:31.946 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.947 "name": "Existed_Raid", 00:23:31.947 "uuid": "e41c21ab-7aa6-479a-81fb-d1a60f9367b6", 00:23:31.947 "strip_size_kb": 64, 00:23:31.947 "state": "online", 00:23:31.947 "raid_level": "raid0", 00:23:31.947 "superblock": true, 00:23:31.947 "num_base_bdevs": 2, 00:23:31.947 "num_base_bdevs_discovered": 2, 00:23:31.947 "num_base_bdevs_operational": 2, 00:23:31.947 "base_bdevs_list": [ 00:23:31.947 { 00:23:31.947 "name": "BaseBdev1", 00:23:31.947 "uuid": "c54e3096-8321-470d-8c0d-0ab0c864eba3", 00:23:31.947 "is_configured": true, 00:23:31.947 "data_offset": 2048, 00:23:31.947 "data_size": 63488 
00:23:31.947 }, 00:23:31.947 { 00:23:31.947 "name": "BaseBdev2", 00:23:31.947 "uuid": "7275cdf3-36fd-4fd6-990c-64d2a0ee8cb7", 00:23:31.947 "is_configured": true, 00:23:31.947 "data_offset": 2048, 00:23:31.947 "data_size": 63488 00:23:31.947 } 00:23:31.947 ] 00:23:31.947 }' 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.947 17:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.205 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:32.205 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:32.205 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:32.205 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:32.205 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:32.205 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:32.205 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:32.205 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:32.206 17:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.206 17:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.206 [2024-11-26 17:20:02.267936] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:32.206 17:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.206 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:32.206 "name": 
"Existed_Raid", 00:23:32.206 "aliases": [ 00:23:32.206 "e41c21ab-7aa6-479a-81fb-d1a60f9367b6" 00:23:32.206 ], 00:23:32.206 "product_name": "Raid Volume", 00:23:32.206 "block_size": 512, 00:23:32.206 "num_blocks": 126976, 00:23:32.206 "uuid": "e41c21ab-7aa6-479a-81fb-d1a60f9367b6", 00:23:32.206 "assigned_rate_limits": { 00:23:32.206 "rw_ios_per_sec": 0, 00:23:32.206 "rw_mbytes_per_sec": 0, 00:23:32.206 "r_mbytes_per_sec": 0, 00:23:32.206 "w_mbytes_per_sec": 0 00:23:32.206 }, 00:23:32.206 "claimed": false, 00:23:32.206 "zoned": false, 00:23:32.206 "supported_io_types": { 00:23:32.206 "read": true, 00:23:32.206 "write": true, 00:23:32.206 "unmap": true, 00:23:32.206 "flush": true, 00:23:32.206 "reset": true, 00:23:32.206 "nvme_admin": false, 00:23:32.206 "nvme_io": false, 00:23:32.206 "nvme_io_md": false, 00:23:32.206 "write_zeroes": true, 00:23:32.206 "zcopy": false, 00:23:32.206 "get_zone_info": false, 00:23:32.206 "zone_management": false, 00:23:32.206 "zone_append": false, 00:23:32.206 "compare": false, 00:23:32.206 "compare_and_write": false, 00:23:32.206 "abort": false, 00:23:32.206 "seek_hole": false, 00:23:32.206 "seek_data": false, 00:23:32.206 "copy": false, 00:23:32.206 "nvme_iov_md": false 00:23:32.206 }, 00:23:32.206 "memory_domains": [ 00:23:32.206 { 00:23:32.206 "dma_device_id": "system", 00:23:32.206 "dma_device_type": 1 00:23:32.206 }, 00:23:32.206 { 00:23:32.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:32.206 "dma_device_type": 2 00:23:32.206 }, 00:23:32.206 { 00:23:32.206 "dma_device_id": "system", 00:23:32.206 "dma_device_type": 1 00:23:32.206 }, 00:23:32.206 { 00:23:32.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:32.206 "dma_device_type": 2 00:23:32.206 } 00:23:32.206 ], 00:23:32.206 "driver_specific": { 00:23:32.206 "raid": { 00:23:32.206 "uuid": "e41c21ab-7aa6-479a-81fb-d1a60f9367b6", 00:23:32.206 "strip_size_kb": 64, 00:23:32.206 "state": "online", 00:23:32.206 "raid_level": "raid0", 00:23:32.206 "superblock": true, 00:23:32.206 
"num_base_bdevs": 2, 00:23:32.206 "num_base_bdevs_discovered": 2, 00:23:32.206 "num_base_bdevs_operational": 2, 00:23:32.206 "base_bdevs_list": [ 00:23:32.206 { 00:23:32.206 "name": "BaseBdev1", 00:23:32.206 "uuid": "c54e3096-8321-470d-8c0d-0ab0c864eba3", 00:23:32.206 "is_configured": true, 00:23:32.206 "data_offset": 2048, 00:23:32.206 "data_size": 63488 00:23:32.206 }, 00:23:32.206 { 00:23:32.206 "name": "BaseBdev2", 00:23:32.206 "uuid": "7275cdf3-36fd-4fd6-990c-64d2a0ee8cb7", 00:23:32.206 "is_configured": true, 00:23:32.206 "data_offset": 2048, 00:23:32.206 "data_size": 63488 00:23:32.206 } 00:23:32.206 ] 00:23:32.206 } 00:23:32.206 } 00:23:32.206 }' 00:23:32.206 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:32.464 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:32.464 BaseBdev2' 00:23:32.464 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:32.464 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:32.464 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:32.464 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:32.464 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:32.464 17:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.464 17:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.464 17:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:32.464 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:32.465 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:32.465 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:32.465 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:32.465 17:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.465 17:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.465 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:32.465 17:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.465 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:32.465 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:32.465 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:32.465 17:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.465 17:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.465 [2024-11-26 17:20:02.499339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:32.465 [2024-11-26 17:20:02.499385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:32.465 [2024-11-26 17:20:02.499451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:32.723 17:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:23:32.723 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:32.723 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:23:32.723 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:32.723 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:23:32.723 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:23:32.723 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:23:32.723 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:32.723 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:23:32.723 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:32.723 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:32.723 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:32.723 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:32.723 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:32.724 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:32.724 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:32.724 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:32.724 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.724 17:20:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.724 17:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.724 17:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.724 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:32.724 "name": "Existed_Raid", 00:23:32.724 "uuid": "e41c21ab-7aa6-479a-81fb-d1a60f9367b6", 00:23:32.724 "strip_size_kb": 64, 00:23:32.724 "state": "offline", 00:23:32.724 "raid_level": "raid0", 00:23:32.724 "superblock": true, 00:23:32.724 "num_base_bdevs": 2, 00:23:32.724 "num_base_bdevs_discovered": 1, 00:23:32.724 "num_base_bdevs_operational": 1, 00:23:32.724 "base_bdevs_list": [ 00:23:32.724 { 00:23:32.724 "name": null, 00:23:32.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.724 "is_configured": false, 00:23:32.724 "data_offset": 0, 00:23:32.724 "data_size": 63488 00:23:32.724 }, 00:23:32.724 { 00:23:32.724 "name": "BaseBdev2", 00:23:32.724 "uuid": "7275cdf3-36fd-4fd6-990c-64d2a0ee8cb7", 00:23:32.724 "is_configured": true, 00:23:32.724 "data_offset": 2048, 00:23:32.724 "data_size": 63488 00:23:32.724 } 00:23:32.724 ] 00:23:32.724 }' 00:23:32.724 17:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:32.724 17:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.982 17:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:32.982 17:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:32.982 17:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.982 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.982 17:20:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.982 17:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:32.982 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.982 17:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:32.982 17:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:32.982 17:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:32.982 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.982 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.982 [2024-11-26 17:20:03.079743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:32.982 [2024-11-26 17:20:03.079827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61064 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61064 ']' 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61064 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61064 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61064' 00:23:33.241 killing process with pid 61064 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61064 00:23:33.241 [2024-11-26 17:20:03.291733] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:33.241 17:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61064 00:23:33.241 [2024-11-26 17:20:03.309197] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:34.619 ************************************ 00:23:34.619 END TEST 
raid_state_function_test_sb 00:23:34.619 ************************************ 00:23:34.619 17:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:23:34.619 00:23:34.619 real 0m5.062s 00:23:34.619 user 0m7.108s 00:23:34.619 sys 0m0.958s 00:23:34.619 17:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:34.619 17:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.619 17:20:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:23:34.619 17:20:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:34.619 17:20:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:34.619 17:20:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:34.619 ************************************ 00:23:34.619 START TEST raid_superblock_test 00:23:34.619 ************************************ 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:34.619 17:20:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61316 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61316 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61316 ']' 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.619 17:20:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.619 [2024-11-26 17:20:04.710633] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:23:34.619 [2024-11-26 17:20:04.710782] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61316 ] 00:23:34.879 [2024-11-26 17:20:04.897748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.138 [2024-11-26 17:20:05.046350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.397 [2024-11-26 17:20:05.295685] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:35.397 [2024-11-26 17:20:05.295990] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:35.662 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.662 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:23:35.662 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:35.662 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:35.662 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:35.662 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:35.662 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:35.662 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:35.662 17:20:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:35.662 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:35.662 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:23:35.662 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.662 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.662 malloc1 00:23:35.662 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.662 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.663 [2024-11-26 17:20:05.661974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:35.663 [2024-11-26 17:20:05.662225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.663 [2024-11-26 17:20:05.662318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:35.663 [2024-11-26 17:20:05.662421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.663 [2024-11-26 17:20:05.665570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.663 [2024-11-26 17:20:05.665738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:35.663 pt1 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:35.663 17:20:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.663 malloc2 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.663 [2024-11-26 17:20:05.723271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:35.663 [2024-11-26 17:20:05.723350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.663 [2024-11-26 17:20:05.723384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:35.663 
[2024-11-26 17:20:05.723397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.663 [2024-11-26 17:20:05.726141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.663 [2024-11-26 17:20:05.726185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:35.663 pt2 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.663 [2024-11-26 17:20:05.735314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:35.663 [2024-11-26 17:20:05.737650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:35.663 [2024-11-26 17:20:05.737838] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:35.663 [2024-11-26 17:20:05.737853] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:35.663 [2024-11-26 17:20:05.738148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:35.663 [2024-11-26 17:20:05.738304] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:35.663 [2024-11-26 17:20:05.738320] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:35.663 [2024-11-26 17:20:05.738498] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.663 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.664 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.664 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.664 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.923 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.923 "name": "raid_bdev1", 00:23:35.923 "uuid": 
"d4436575-8048-48c9-9298-04df0cec3ad3", 00:23:35.923 "strip_size_kb": 64, 00:23:35.923 "state": "online", 00:23:35.923 "raid_level": "raid0", 00:23:35.923 "superblock": true, 00:23:35.923 "num_base_bdevs": 2, 00:23:35.923 "num_base_bdevs_discovered": 2, 00:23:35.923 "num_base_bdevs_operational": 2, 00:23:35.923 "base_bdevs_list": [ 00:23:35.923 { 00:23:35.923 "name": "pt1", 00:23:35.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:35.923 "is_configured": true, 00:23:35.923 "data_offset": 2048, 00:23:35.923 "data_size": 63488 00:23:35.923 }, 00:23:35.923 { 00:23:35.923 "name": "pt2", 00:23:35.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:35.923 "is_configured": true, 00:23:35.923 "data_offset": 2048, 00:23:35.923 "data_size": 63488 00:23:35.923 } 00:23:35.923 ] 00:23:35.923 }' 00:23:35.923 17:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.923 17:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.182 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:36.182 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:36.182 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:36.182 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:36.182 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:36.182 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:36.182 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:36.182 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.182 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:36.182 17:20:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.182 [2024-11-26 17:20:06.171023] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:36.182 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.182 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:36.182 "name": "raid_bdev1", 00:23:36.182 "aliases": [ 00:23:36.182 "d4436575-8048-48c9-9298-04df0cec3ad3" 00:23:36.182 ], 00:23:36.182 "product_name": "Raid Volume", 00:23:36.182 "block_size": 512, 00:23:36.182 "num_blocks": 126976, 00:23:36.182 "uuid": "d4436575-8048-48c9-9298-04df0cec3ad3", 00:23:36.182 "assigned_rate_limits": { 00:23:36.182 "rw_ios_per_sec": 0, 00:23:36.182 "rw_mbytes_per_sec": 0, 00:23:36.182 "r_mbytes_per_sec": 0, 00:23:36.182 "w_mbytes_per_sec": 0 00:23:36.182 }, 00:23:36.182 "claimed": false, 00:23:36.182 "zoned": false, 00:23:36.182 "supported_io_types": { 00:23:36.182 "read": true, 00:23:36.182 "write": true, 00:23:36.182 "unmap": true, 00:23:36.182 "flush": true, 00:23:36.182 "reset": true, 00:23:36.182 "nvme_admin": false, 00:23:36.182 "nvme_io": false, 00:23:36.182 "nvme_io_md": false, 00:23:36.182 "write_zeroes": true, 00:23:36.182 "zcopy": false, 00:23:36.182 "get_zone_info": false, 00:23:36.182 "zone_management": false, 00:23:36.182 "zone_append": false, 00:23:36.182 "compare": false, 00:23:36.182 "compare_and_write": false, 00:23:36.182 "abort": false, 00:23:36.183 "seek_hole": false, 00:23:36.183 "seek_data": false, 00:23:36.183 "copy": false, 00:23:36.183 "nvme_iov_md": false 00:23:36.183 }, 00:23:36.183 "memory_domains": [ 00:23:36.183 { 00:23:36.183 "dma_device_id": "system", 00:23:36.183 "dma_device_type": 1 00:23:36.183 }, 00:23:36.183 { 00:23:36.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:36.183 "dma_device_type": 2 00:23:36.183 }, 00:23:36.183 { 00:23:36.183 "dma_device_id": "system", 00:23:36.183 "dma_device_type": 
1 00:23:36.183 }, 00:23:36.183 { 00:23:36.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:36.183 "dma_device_type": 2 00:23:36.183 } 00:23:36.183 ], 00:23:36.183 "driver_specific": { 00:23:36.183 "raid": { 00:23:36.183 "uuid": "d4436575-8048-48c9-9298-04df0cec3ad3", 00:23:36.183 "strip_size_kb": 64, 00:23:36.183 "state": "online", 00:23:36.183 "raid_level": "raid0", 00:23:36.183 "superblock": true, 00:23:36.183 "num_base_bdevs": 2, 00:23:36.183 "num_base_bdevs_discovered": 2, 00:23:36.183 "num_base_bdevs_operational": 2, 00:23:36.183 "base_bdevs_list": [ 00:23:36.183 { 00:23:36.183 "name": "pt1", 00:23:36.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:36.183 "is_configured": true, 00:23:36.183 "data_offset": 2048, 00:23:36.183 "data_size": 63488 00:23:36.183 }, 00:23:36.183 { 00:23:36.183 "name": "pt2", 00:23:36.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:36.183 "is_configured": true, 00:23:36.183 "data_offset": 2048, 00:23:36.183 "data_size": 63488 00:23:36.183 } 00:23:36.183 ] 00:23:36.183 } 00:23:36.183 } 00:23:36.183 }' 00:23:36.183 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:36.183 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:36.183 pt2' 00:23:36.183 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.441 17:20:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.441 [2024-11-26 17:20:06.398717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d4436575-8048-48c9-9298-04df0cec3ad3 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d4436575-8048-48c9-9298-04df0cec3ad3 ']' 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.441 [2024-11-26 17:20:06.438314] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:36.441 [2024-11-26 17:20:06.438473] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:36.441 [2024-11-26 17:20:06.438717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:36.441 [2024-11-26 17:20:06.438872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:36.441 [2024-11-26 17:20:06.438981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:36.441 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.701 [2024-11-26 17:20:06.562195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:36.701 [2024-11-26 17:20:06.564798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:36.701 [2024-11-26 17:20:06.564972] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:36.701 [2024-11-26 17:20:06.565162] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:36.701 [2024-11-26 17:20:06.565300] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:36.701 [2024-11-26 17:20:06.565347] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:36.701 request: 00:23:36.701 { 00:23:36.701 "name": "raid_bdev1", 00:23:36.701 "raid_level": "raid0", 00:23:36.701 "base_bdevs": [ 00:23:36.701 "malloc1", 00:23:36.701 "malloc2" 00:23:36.701 ], 00:23:36.701 "strip_size_kb": 64, 00:23:36.701 "superblock": false, 00:23:36.701 "method": "bdev_raid_create", 00:23:36.701 "req_id": 1 00:23:36.701 } 00:23:36.701 Got JSON-RPC error response 00:23:36.701 response: 00:23:36.701 { 00:23:36.701 "code": -17, 00:23:36.701 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:36.701 } 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.701 [2024-11-26 17:20:06.610237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:36.701 [2024-11-26 17:20:06.610338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.701 [2024-11-26 17:20:06.610365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:36.701 [2024-11-26 17:20:06.610381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.701 [2024-11-26 17:20:06.613336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.701 [2024-11-26 17:20:06.613588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:36.701 [2024-11-26 17:20:06.613751] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:36.701 [2024-11-26 17:20:06.613841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:36.701 pt1 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.701 "name": "raid_bdev1", 00:23:36.701 "uuid": "d4436575-8048-48c9-9298-04df0cec3ad3", 00:23:36.701 "strip_size_kb": 64, 00:23:36.701 "state": "configuring", 00:23:36.701 "raid_level": "raid0", 00:23:36.701 "superblock": true, 00:23:36.701 "num_base_bdevs": 2, 00:23:36.701 "num_base_bdevs_discovered": 1, 00:23:36.701 "num_base_bdevs_operational": 2, 00:23:36.701 "base_bdevs_list": [ 00:23:36.701 { 00:23:36.701 "name": "pt1", 00:23:36.701 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:36.701 "is_configured": true, 00:23:36.701 "data_offset": 2048, 00:23:36.701 "data_size": 63488 00:23:36.701 }, 00:23:36.701 { 00:23:36.701 "name": null, 00:23:36.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:36.701 "is_configured": false, 00:23:36.701 "data_offset": 2048, 00:23:36.701 "data_size": 63488 00:23:36.701 } 00:23:36.701 ] 00:23:36.701 }' 00:23:36.701 17:20:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.701 17:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.980 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:23:36.980 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:36.980 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:36.980 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:36.980 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.980 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.981 [2024-11-26 17:20:07.041676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:36.981 [2024-11-26 17:20:07.041779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.981 [2024-11-26 17:20:07.041808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:36.981 [2024-11-26 17:20:07.041824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.981 [2024-11-26 17:20:07.042397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.981 [2024-11-26 17:20:07.042423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:36.981 [2024-11-26 17:20:07.042561] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:36.981 [2024-11-26 17:20:07.042603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:36.981 [2024-11-26 17:20:07.042730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:36.981 [2024-11-26 17:20:07.042810] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:36.981 [2024-11-26 17:20:07.043150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:36.981 [2024-11-26 17:20:07.043321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:36.981 [2024-11-26 17:20:07.043333] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:36.981 [2024-11-26 17:20:07.043503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.981 pt2 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.981 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.246 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:37.246 "name": "raid_bdev1", 00:23:37.246 "uuid": "d4436575-8048-48c9-9298-04df0cec3ad3", 00:23:37.246 "strip_size_kb": 64, 00:23:37.246 "state": "online", 00:23:37.246 "raid_level": "raid0", 00:23:37.246 "superblock": true, 00:23:37.246 "num_base_bdevs": 2, 00:23:37.246 "num_base_bdevs_discovered": 2, 00:23:37.246 "num_base_bdevs_operational": 2, 00:23:37.246 "base_bdevs_list": [ 00:23:37.246 { 00:23:37.246 "name": "pt1", 00:23:37.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:37.246 "is_configured": true, 00:23:37.246 "data_offset": 2048, 00:23:37.246 "data_size": 63488 00:23:37.246 }, 00:23:37.246 { 00:23:37.246 "name": "pt2", 00:23:37.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:37.246 "is_configured": true, 00:23:37.246 "data_offset": 2048, 00:23:37.246 "data_size": 63488 00:23:37.246 } 00:23:37.246 ] 00:23:37.246 }' 00:23:37.246 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:37.246 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:37.506 
17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.506 [2024-11-26 17:20:07.473878] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:37.506 "name": "raid_bdev1", 00:23:37.506 "aliases": [ 00:23:37.506 "d4436575-8048-48c9-9298-04df0cec3ad3" 00:23:37.506 ], 00:23:37.506 "product_name": "Raid Volume", 00:23:37.506 "block_size": 512, 00:23:37.506 "num_blocks": 126976, 00:23:37.506 "uuid": "d4436575-8048-48c9-9298-04df0cec3ad3", 00:23:37.506 "assigned_rate_limits": { 00:23:37.506 "rw_ios_per_sec": 0, 00:23:37.506 "rw_mbytes_per_sec": 0, 00:23:37.506 "r_mbytes_per_sec": 0, 00:23:37.506 "w_mbytes_per_sec": 0 00:23:37.506 }, 00:23:37.506 "claimed": false, 00:23:37.506 "zoned": false, 00:23:37.506 "supported_io_types": { 00:23:37.506 "read": true, 00:23:37.506 "write": true, 00:23:37.506 "unmap": true, 00:23:37.506 "flush": true, 00:23:37.506 "reset": true, 00:23:37.506 "nvme_admin": false, 00:23:37.506 "nvme_io": false, 00:23:37.506 "nvme_io_md": false, 00:23:37.506 
"write_zeroes": true, 00:23:37.506 "zcopy": false, 00:23:37.506 "get_zone_info": false, 00:23:37.506 "zone_management": false, 00:23:37.506 "zone_append": false, 00:23:37.506 "compare": false, 00:23:37.506 "compare_and_write": false, 00:23:37.506 "abort": false, 00:23:37.506 "seek_hole": false, 00:23:37.506 "seek_data": false, 00:23:37.506 "copy": false, 00:23:37.506 "nvme_iov_md": false 00:23:37.506 }, 00:23:37.506 "memory_domains": [ 00:23:37.506 { 00:23:37.506 "dma_device_id": "system", 00:23:37.506 "dma_device_type": 1 00:23:37.506 }, 00:23:37.506 { 00:23:37.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.506 "dma_device_type": 2 00:23:37.506 }, 00:23:37.506 { 00:23:37.506 "dma_device_id": "system", 00:23:37.506 "dma_device_type": 1 00:23:37.506 }, 00:23:37.506 { 00:23:37.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.506 "dma_device_type": 2 00:23:37.506 } 00:23:37.506 ], 00:23:37.506 "driver_specific": { 00:23:37.506 "raid": { 00:23:37.506 "uuid": "d4436575-8048-48c9-9298-04df0cec3ad3", 00:23:37.506 "strip_size_kb": 64, 00:23:37.506 "state": "online", 00:23:37.506 "raid_level": "raid0", 00:23:37.506 "superblock": true, 00:23:37.506 "num_base_bdevs": 2, 00:23:37.506 "num_base_bdevs_discovered": 2, 00:23:37.506 "num_base_bdevs_operational": 2, 00:23:37.506 "base_bdevs_list": [ 00:23:37.506 { 00:23:37.506 "name": "pt1", 00:23:37.506 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:37.506 "is_configured": true, 00:23:37.506 "data_offset": 2048, 00:23:37.506 "data_size": 63488 00:23:37.506 }, 00:23:37.506 { 00:23:37.506 "name": "pt2", 00:23:37.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:37.506 "is_configured": true, 00:23:37.506 "data_offset": 2048, 00:23:37.506 "data_size": 63488 00:23:37.506 } 00:23:37.506 ] 00:23:37.506 } 00:23:37.506 } 00:23:37.506 }' 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:37.506 pt2' 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:37.506 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:37.507 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.507 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.767 17:20:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:37.767 [2024-11-26 17:20:07.697870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d4436575-8048-48c9-9298-04df0cec3ad3 '!=' d4436575-8048-48c9-9298-04df0cec3ad3 ']' 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61316 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61316 ']' 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61316 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61316 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:37.767 killing process with pid 61316 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61316' 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61316 00:23:37.767 [2024-11-26 17:20:07.786180] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:37.767 17:20:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61316 00:23:37.767 [2024-11-26 17:20:07.786303] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:37.767 [2024-11-26 17:20:07.786361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:37.767 [2024-11-26 17:20:07.786377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:38.026 [2024-11-26 17:20:08.000366] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:39.405 17:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:23:39.405 00:23:39.405 real 0m4.610s 00:23:39.405 user 0m6.350s 00:23:39.405 sys 0m0.932s 00:23:39.405 17:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.405 ************************************ 00:23:39.405 END TEST raid_superblock_test 00:23:39.405 ************************************ 00:23:39.405 17:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.405 17:20:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:23:39.405 17:20:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:39.405 17:20:09 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:23:39.405 17:20:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:39.405 ************************************ 00:23:39.405 START TEST raid_read_error_test 00:23:39.405 ************************************ 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ArizuQXW1L 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61528 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61528 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61528 ']' 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.405 17:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.405 [2024-11-26 17:20:09.414336] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:23:39.405 [2024-11-26 17:20:09.414717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61528 ] 00:23:39.664 [2024-11-26 17:20:09.601065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.664 [2024-11-26 17:20:09.747538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.923 [2024-11-26 17:20:09.982246] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:39.923 [2024-11-26 17:20:09.982324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:40.503 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.503 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:23:40.503 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:40.503 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:40.503 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.504 BaseBdev1_malloc 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.504 true 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.504 [2024-11-26 17:20:10.381676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:40.504 [2024-11-26 17:20:10.381922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.504 [2024-11-26 17:20:10.381966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:40.504 [2024-11-26 17:20:10.381984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.504 [2024-11-26 17:20:10.384874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.504 [2024-11-26 17:20:10.385040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:40.504 BaseBdev1 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:23:40.504 BaseBdev2_malloc 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.504 true 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.504 [2024-11-26 17:20:10.454639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:40.504 [2024-11-26 17:20:10.454715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.504 [2024-11-26 17:20:10.454736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:40.504 [2024-11-26 17:20:10.454751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.504 [2024-11-26 17:20:10.457391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.504 [2024-11-26 17:20:10.457448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:40.504 BaseBdev2 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:23:40.504 17:20:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.504 [2024-11-26 17:20:10.466683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:40.504 [2024-11-26 17:20:10.469215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:40.504 [2024-11-26 17:20:10.469433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:40.504 [2024-11-26 17:20:10.469455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:40.504 [2024-11-26 17:20:10.469753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:40.504 [2024-11-26 17:20:10.469945] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:40.504 [2024-11-26 17:20:10.470013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:40.504 [2024-11-26 17:20:10.470199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:40.504 "name": "raid_bdev1", 00:23:40.504 "uuid": "af845940-bea4-46d3-9783-cae0e3ea5aa4", 00:23:40.504 "strip_size_kb": 64, 00:23:40.504 "state": "online", 00:23:40.504 "raid_level": "raid0", 00:23:40.504 "superblock": true, 00:23:40.504 "num_base_bdevs": 2, 00:23:40.504 "num_base_bdevs_discovered": 2, 00:23:40.504 "num_base_bdevs_operational": 2, 00:23:40.504 "base_bdevs_list": [ 00:23:40.504 { 00:23:40.504 "name": "BaseBdev1", 00:23:40.504 "uuid": "7ac434b5-6c53-506e-8ed9-2419385751a6", 00:23:40.504 "is_configured": true, 00:23:40.504 "data_offset": 2048, 00:23:40.504 "data_size": 63488 00:23:40.504 }, 00:23:40.504 { 00:23:40.504 "name": "BaseBdev2", 00:23:40.504 "uuid": "34a07c4b-909c-5217-8b83-c6d693d3fa87", 00:23:40.504 "is_configured": true, 00:23:40.504 "data_offset": 2048, 00:23:40.504 "data_size": 63488 00:23:40.504 } 00:23:40.504 ] 00:23:40.504 }' 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:40.504 17:20:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.072 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:23:41.072 17:20:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:41.072 [2024-11-26 17:20:10.987685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:42.008 "name": "raid_bdev1", 00:23:42.008 "uuid": "af845940-bea4-46d3-9783-cae0e3ea5aa4", 00:23:42.008 "strip_size_kb": 64, 00:23:42.008 "state": "online", 00:23:42.008 "raid_level": "raid0", 00:23:42.008 "superblock": true, 00:23:42.008 "num_base_bdevs": 2, 00:23:42.008 "num_base_bdevs_discovered": 2, 00:23:42.008 "num_base_bdevs_operational": 2, 00:23:42.008 "base_bdevs_list": [ 00:23:42.008 { 00:23:42.008 "name": "BaseBdev1", 00:23:42.008 "uuid": "7ac434b5-6c53-506e-8ed9-2419385751a6", 00:23:42.008 "is_configured": true, 00:23:42.008 "data_offset": 2048, 00:23:42.008 "data_size": 63488 00:23:42.008 }, 00:23:42.008 { 00:23:42.008 "name": "BaseBdev2", 00:23:42.008 "uuid": "34a07c4b-909c-5217-8b83-c6d693d3fa87", 00:23:42.008 "is_configured": true, 00:23:42.008 "data_offset": 2048, 00:23:42.008 "data_size": 63488 00:23:42.008 } 00:23:42.008 ] 00:23:42.008 }' 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:42.008 17:20:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.266 17:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:42.266 17:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.266 17:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.266 [2024-11-26 17:20:12.370814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:42.266 [2024-11-26 17:20:12.370858] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:42.266 [2024-11-26 17:20:12.373576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:42.266 [2024-11-26 17:20:12.373770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:42.266 [2024-11-26 17:20:12.373826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:42.266 [2024-11-26 17:20:12.373843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:42.266 { 00:23:42.266 "results": [ 00:23:42.266 { 00:23:42.266 "job": "raid_bdev1", 00:23:42.266 "core_mask": "0x1", 00:23:42.266 "workload": "randrw", 00:23:42.266 "percentage": 50, 00:23:42.266 "status": "finished", 00:23:42.266 "queue_depth": 1, 00:23:42.266 "io_size": 131072, 00:23:42.266 "runtime": 1.382842, 00:23:42.266 "iops": 16187.67726175514, 00:23:42.266 "mibps": 2023.4596577193925, 00:23:42.266 "io_failed": 1, 00:23:42.266 "io_timeout": 0, 00:23:42.266 "avg_latency_us": 85.74981394352537, 00:23:42.266 "min_latency_us": 26.730923694779115, 00:23:42.266 "max_latency_us": 1526.5413654618474 00:23:42.266 } 00:23:42.266 ], 00:23:42.266 "core_count": 1 00:23:42.266 } 00:23:42.266 17:20:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.266 17:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61528 00:23:42.266 17:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61528 ']' 00:23:42.266 17:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61528 00:23:42.524 17:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:23:42.525 17:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.525 17:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61528 00:23:42.525 killing process with pid 61528 00:23:42.525 17:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:42.525 17:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:42.525 17:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61528' 00:23:42.525 17:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61528 00:23:42.525 [2024-11-26 17:20:12.423109] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:42.525 17:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61528 00:23:42.525 [2024-11-26 17:20:12.563589] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:43.903 17:20:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:23:43.903 17:20:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ArizuQXW1L 00:23:43.903 17:20:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:23:43.903 ************************************ 00:23:43.903 END TEST raid_read_error_test 00:23:43.903 ************************************ 00:23:43.904 17:20:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:23:43.904 17:20:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:23:43.904 17:20:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:43.904 17:20:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:23:43.904 17:20:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:23:43.904 00:23:43.904 real 0m4.537s 00:23:43.904 user 0m5.380s 00:23:43.904 sys 0m0.669s 00:23:43.904 17:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:43.904 17:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.904 17:20:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:23:43.904 17:20:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:43.904 17:20:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:43.904 17:20:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:43.904 ************************************ 00:23:43.904 START TEST raid_write_error_test 00:23:43.904 ************************************ 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:43.904 17:20:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nDjm1NuPQQ 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61669 00:23:43.904 17:20:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61669 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61669 ']' 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.904 17:20:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.162 [2024-11-26 17:20:14.089105] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:23:44.162 [2024-11-26 17:20:14.089242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61669 ] 00:23:44.162 [2024-11-26 17:20:14.274219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.419 [2024-11-26 17:20:14.411706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.736 [2024-11-26 17:20:14.639539] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:44.736 [2024-11-26 17:20:14.639595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:45.024 17:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.024 17:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:23:45.024 17:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:45.024 17:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:45.024 17:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.024 17:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.024 BaseBdev1_malloc 00:23:45.024 17:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.024 17:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:23:45.024 17:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.024 17:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.024 true 00:23:45.024 17:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:23:45.025 17:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:45.025 17:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.025 17:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.025 [2024-11-26 17:20:14.987532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:45.025 [2024-11-26 17:20:14.987609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.025 [2024-11-26 17:20:14.987636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:45.025 [2024-11-26 17:20:14.987652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.025 [2024-11-26 17:20:14.990422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.025 [2024-11-26 17:20:14.990475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:45.025 BaseBdev1 00:23:45.025 17:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.025 17:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:45.025 17:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:45.025 17:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.025 17:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.025 BaseBdev2_malloc 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:23:45.025 17:20:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.025 true 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.025 [2024-11-26 17:20:15.058073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:45.025 [2024-11-26 17:20:15.058131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.025 [2024-11-26 17:20:15.058154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:45.025 [2024-11-26 17:20:15.058172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.025 [2024-11-26 17:20:15.060862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.025 [2024-11-26 17:20:15.060900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:45.025 BaseBdev2 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.025 [2024-11-26 17:20:15.070143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:23:45.025 [2024-11-26 17:20:15.072575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:45.025 [2024-11-26 17:20:15.072793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:45.025 [2024-11-26 17:20:15.072815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:45.025 [2024-11-26 17:20:15.073119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:45.025 [2024-11-26 17:20:15.073317] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:45.025 [2024-11-26 17:20:15.073345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:45.025 [2024-11-26 17:20:15.073558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:45.025 "name": "raid_bdev1", 00:23:45.025 "uuid": "832a3ec2-b18d-4ec1-a2db-5aec8b027904", 00:23:45.025 "strip_size_kb": 64, 00:23:45.025 "state": "online", 00:23:45.025 "raid_level": "raid0", 00:23:45.025 "superblock": true, 00:23:45.025 "num_base_bdevs": 2, 00:23:45.025 "num_base_bdevs_discovered": 2, 00:23:45.025 "num_base_bdevs_operational": 2, 00:23:45.025 "base_bdevs_list": [ 00:23:45.025 { 00:23:45.025 "name": "BaseBdev1", 00:23:45.025 "uuid": "c9b79cb1-d0be-5944-ab98-ea5f0a805bf8", 00:23:45.025 "is_configured": true, 00:23:45.025 "data_offset": 2048, 00:23:45.025 "data_size": 63488 00:23:45.025 }, 00:23:45.025 { 00:23:45.025 "name": "BaseBdev2", 00:23:45.025 "uuid": "460a19f2-6a61-5ac5-ad02-95443291caea", 00:23:45.025 "is_configured": true, 00:23:45.025 "data_offset": 2048, 00:23:45.025 "data_size": 63488 00:23:45.025 } 00:23:45.025 ] 00:23:45.025 }' 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:45.025 17:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.592 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:45.592 17:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:23:45.592 [2024-11-26 17:20:15.551037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:46.529 "name": "raid_bdev1", 00:23:46.529 "uuid": "832a3ec2-b18d-4ec1-a2db-5aec8b027904", 00:23:46.529 "strip_size_kb": 64, 00:23:46.529 "state": "online", 00:23:46.529 "raid_level": "raid0", 00:23:46.529 "superblock": true, 00:23:46.529 "num_base_bdevs": 2, 00:23:46.529 "num_base_bdevs_discovered": 2, 00:23:46.529 "num_base_bdevs_operational": 2, 00:23:46.529 "base_bdevs_list": [ 00:23:46.529 { 00:23:46.529 "name": "BaseBdev1", 00:23:46.529 "uuid": "c9b79cb1-d0be-5944-ab98-ea5f0a805bf8", 00:23:46.529 "is_configured": true, 00:23:46.529 "data_offset": 2048, 00:23:46.529 "data_size": 63488 00:23:46.529 }, 00:23:46.529 { 00:23:46.529 "name": "BaseBdev2", 00:23:46.529 "uuid": "460a19f2-6a61-5ac5-ad02-95443291caea", 00:23:46.529 "is_configured": true, 00:23:46.529 "data_offset": 2048, 00:23:46.529 "data_size": 63488 00:23:46.529 } 00:23:46.529 ] 00:23:46.529 }' 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:46.529 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.788 17:20:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:46.788 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.788 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.788 [2024-11-26 17:20:16.885611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:46.788 [2024-11-26 17:20:16.885823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:46.788 [2024-11-26 17:20:16.888762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:46.788 [2024-11-26 17:20:16.888808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.788 [2024-11-26 17:20:16.888846] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:46.788 [2024-11-26 17:20:16.888861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:46.788 { 00:23:46.788 "results": [ 00:23:46.788 { 00:23:46.788 "job": "raid_bdev1", 00:23:46.788 "core_mask": "0x1", 00:23:46.788 "workload": "randrw", 00:23:46.788 "percentage": 50, 00:23:46.788 "status": "finished", 00:23:46.788 "queue_depth": 1, 00:23:46.788 "io_size": 131072, 00:23:46.788 "runtime": 1.334708, 00:23:46.788 "iops": 16255.240846687066, 00:23:46.788 "mibps": 2031.9051058358832, 00:23:46.788 "io_failed": 1, 00:23:46.788 "io_timeout": 0, 00:23:46.788 "avg_latency_us": 85.05815913328384, 00:23:46.788 "min_latency_us": 26.730923694779115, 00:23:46.788 "max_latency_us": 1539.701204819277 00:23:46.788 } 00:23:46.788 ], 00:23:46.788 "core_count": 1 00:23:46.788 } 00:23:46.788 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.788 17:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61669 00:23:46.788 17:20:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61669 ']' 00:23:46.788 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61669 00:23:46.788 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:23:46.788 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.788 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61669 00:23:47.046 killing process with pid 61669 00:23:47.046 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:47.046 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:47.046 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61669' 00:23:47.046 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61669 00:23:47.046 [2024-11-26 17:20:16.937688] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:47.046 17:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61669 00:23:47.046 [2024-11-26 17:20:17.077107] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:48.423 17:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nDjm1NuPQQ 00:23:48.423 17:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:23:48.423 17:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:23:48.423 17:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:23:48.423 17:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:23:48.423 ************************************ 00:23:48.423 END TEST raid_write_error_test 00:23:48.423 
************************************ 00:23:48.423 17:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:48.423 17:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:23:48.424 17:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:23:48.424 00:23:48.424 real 0m4.380s 00:23:48.424 user 0m5.066s 00:23:48.424 sys 0m0.654s 00:23:48.424 17:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.424 17:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.424 17:20:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:23:48.424 17:20:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:23:48.424 17:20:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:48.424 17:20:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.424 17:20:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:48.424 ************************************ 00:23:48.424 START TEST raid_state_function_test 00:23:48.424 ************************************ 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:23:48.424 Process raid pid: 61811 00:23:48.424 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61811 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61811' 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61811 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61811 ']' 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.424 17:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.424 [2024-11-26 17:20:18.525853] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:23:48.424 [2024-11-26 17:20:18.526192] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.683 [2024-11-26 17:20:18.699800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.942 [2024-11-26 17:20:18.847033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.201 [2024-11-26 17:20:19.068327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:49.201 [2024-11-26 17:20:19.068595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.459 [2024-11-26 17:20:19.370064] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:49.459 [2024-11-26 17:20:19.370136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:49.459 [2024-11-26 17:20:19.370149] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:49.459 [2024-11-26 17:20:19.370164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.459 17:20:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.459 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:49.459 "name": "Existed_Raid", 00:23:49.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.459 "strip_size_kb": 64, 00:23:49.459 "state": "configuring", 00:23:49.459 
"raid_level": "concat", 00:23:49.459 "superblock": false, 00:23:49.459 "num_base_bdevs": 2, 00:23:49.459 "num_base_bdevs_discovered": 0, 00:23:49.459 "num_base_bdevs_operational": 2, 00:23:49.459 "base_bdevs_list": [ 00:23:49.459 { 00:23:49.459 "name": "BaseBdev1", 00:23:49.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.459 "is_configured": false, 00:23:49.459 "data_offset": 0, 00:23:49.459 "data_size": 0 00:23:49.459 }, 00:23:49.459 { 00:23:49.459 "name": "BaseBdev2", 00:23:49.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.459 "is_configured": false, 00:23:49.459 "data_offset": 0, 00:23:49.459 "data_size": 0 00:23:49.459 } 00:23:49.460 ] 00:23:49.460 }' 00:23:49.460 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:49.460 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.718 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:49.718 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.718 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.718 [2024-11-26 17:20:19.829647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:49.719 [2024-11-26 17:20:19.829701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:49.978 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.978 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:49.978 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.978 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:23:49.978 [2024-11-26 17:20:19.841600] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:49.978 [2024-11-26 17:20:19.841659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:49.978 [2024-11-26 17:20:19.841671] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:49.978 [2024-11-26 17:20:19.841688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:49.978 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.978 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.979 [2024-11-26 17:20:19.893301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:49.979 BaseBdev1 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.979 [ 00:23:49.979 { 00:23:49.979 "name": "BaseBdev1", 00:23:49.979 "aliases": [ 00:23:49.979 "792c9e3b-de96-4813-8c02-ccc645bf3953" 00:23:49.979 ], 00:23:49.979 "product_name": "Malloc disk", 00:23:49.979 "block_size": 512, 00:23:49.979 "num_blocks": 65536, 00:23:49.979 "uuid": "792c9e3b-de96-4813-8c02-ccc645bf3953", 00:23:49.979 "assigned_rate_limits": { 00:23:49.979 "rw_ios_per_sec": 0, 00:23:49.979 "rw_mbytes_per_sec": 0, 00:23:49.979 "r_mbytes_per_sec": 0, 00:23:49.979 "w_mbytes_per_sec": 0 00:23:49.979 }, 00:23:49.979 "claimed": true, 00:23:49.979 "claim_type": "exclusive_write", 00:23:49.979 "zoned": false, 00:23:49.979 "supported_io_types": { 00:23:49.979 "read": true, 00:23:49.979 "write": true, 00:23:49.979 "unmap": true, 00:23:49.979 "flush": true, 00:23:49.979 "reset": true, 00:23:49.979 "nvme_admin": false, 00:23:49.979 "nvme_io": false, 00:23:49.979 "nvme_io_md": false, 00:23:49.979 "write_zeroes": true, 00:23:49.979 "zcopy": true, 00:23:49.979 "get_zone_info": false, 00:23:49.979 "zone_management": false, 00:23:49.979 "zone_append": false, 00:23:49.979 "compare": false, 00:23:49.979 "compare_and_write": false, 00:23:49.979 "abort": true, 00:23:49.979 "seek_hole": false, 00:23:49.979 "seek_data": false, 00:23:49.979 "copy": true, 00:23:49.979 "nvme_iov_md": 
false 00:23:49.979 }, 00:23:49.979 "memory_domains": [ 00:23:49.979 { 00:23:49.979 "dma_device_id": "system", 00:23:49.979 "dma_device_type": 1 00:23:49.979 }, 00:23:49.979 { 00:23:49.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:49.979 "dma_device_type": 2 00:23:49.979 } 00:23:49.979 ], 00:23:49.979 "driver_specific": {} 00:23:49.979 } 00:23:49.979 ] 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.979 17:20:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:49.979 "name": "Existed_Raid", 00:23:49.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.979 "strip_size_kb": 64, 00:23:49.979 "state": "configuring", 00:23:49.979 "raid_level": "concat", 00:23:49.979 "superblock": false, 00:23:49.979 "num_base_bdevs": 2, 00:23:49.979 "num_base_bdevs_discovered": 1, 00:23:49.979 "num_base_bdevs_operational": 2, 00:23:49.979 "base_bdevs_list": [ 00:23:49.979 { 00:23:49.979 "name": "BaseBdev1", 00:23:49.979 "uuid": "792c9e3b-de96-4813-8c02-ccc645bf3953", 00:23:49.979 "is_configured": true, 00:23:49.979 "data_offset": 0, 00:23:49.979 "data_size": 65536 00:23:49.979 }, 00:23:49.979 { 00:23:49.979 "name": "BaseBdev2", 00:23:49.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.979 "is_configured": false, 00:23:49.979 "data_offset": 0, 00:23:49.979 "data_size": 0 00:23:49.979 } 00:23:49.979 ] 00:23:49.979 }' 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:49.979 17:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.238 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:50.238 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.238 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.238 [2024-11-26 17:20:20.324757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:50.238 [2024-11-26 17:20:20.324827] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:50.238 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.239 [2024-11-26 17:20:20.332772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:50.239 [2024-11-26 17:20:20.335163] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:50.239 [2024-11-26 17:20:20.335216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.239 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:50.497 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.497 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:50.497 "name": "Existed_Raid", 00:23:50.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.497 "strip_size_kb": 64, 00:23:50.497 "state": "configuring", 00:23:50.497 "raid_level": "concat", 00:23:50.497 "superblock": false, 00:23:50.497 "num_base_bdevs": 2, 00:23:50.497 "num_base_bdevs_discovered": 1, 00:23:50.497 "num_base_bdevs_operational": 2, 00:23:50.497 "base_bdevs_list": [ 00:23:50.497 { 00:23:50.497 "name": "BaseBdev1", 00:23:50.497 "uuid": "792c9e3b-de96-4813-8c02-ccc645bf3953", 00:23:50.497 "is_configured": true, 00:23:50.497 "data_offset": 0, 00:23:50.497 "data_size": 65536 00:23:50.497 }, 00:23:50.497 { 00:23:50.497 "name": "BaseBdev2", 00:23:50.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.498 "is_configured": false, 00:23:50.498 "data_offset": 0, 00:23:50.498 "data_size": 0 
00:23:50.498 } 00:23:50.498 ] 00:23:50.498 }' 00:23:50.498 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:50.498 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.757 [2024-11-26 17:20:20.807170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:50.757 [2024-11-26 17:20:20.807237] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:50.757 [2024-11-26 17:20:20.807248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:50.757 [2024-11-26 17:20:20.807578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:50.757 [2024-11-26 17:20:20.807778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:50.757 [2024-11-26 17:20:20.807801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:50.757 [2024-11-26 17:20:20.808093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:50.757 BaseBdev2 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:50.757 17:20:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.757 [ 00:23:50.757 { 00:23:50.757 "name": "BaseBdev2", 00:23:50.757 "aliases": [ 00:23:50.757 "b1f12c83-3dea-41a6-930d-a332fc3dfa95" 00:23:50.757 ], 00:23:50.757 "product_name": "Malloc disk", 00:23:50.757 "block_size": 512, 00:23:50.757 "num_blocks": 65536, 00:23:50.757 "uuid": "b1f12c83-3dea-41a6-930d-a332fc3dfa95", 00:23:50.757 "assigned_rate_limits": { 00:23:50.757 "rw_ios_per_sec": 0, 00:23:50.757 "rw_mbytes_per_sec": 0, 00:23:50.757 "r_mbytes_per_sec": 0, 00:23:50.757 "w_mbytes_per_sec": 0 00:23:50.757 }, 00:23:50.757 "claimed": true, 00:23:50.757 "claim_type": "exclusive_write", 00:23:50.757 "zoned": false, 00:23:50.757 "supported_io_types": { 00:23:50.757 "read": true, 00:23:50.757 "write": true, 00:23:50.757 "unmap": true, 00:23:50.757 "flush": true, 00:23:50.757 "reset": true, 00:23:50.757 "nvme_admin": false, 00:23:50.757 "nvme_io": false, 00:23:50.757 "nvme_io_md": 
false, 00:23:50.757 "write_zeroes": true, 00:23:50.757 "zcopy": true, 00:23:50.757 "get_zone_info": false, 00:23:50.757 "zone_management": false, 00:23:50.757 "zone_append": false, 00:23:50.757 "compare": false, 00:23:50.757 "compare_and_write": false, 00:23:50.757 "abort": true, 00:23:50.757 "seek_hole": false, 00:23:50.757 "seek_data": false, 00:23:50.757 "copy": true, 00:23:50.757 "nvme_iov_md": false 00:23:50.757 }, 00:23:50.757 "memory_domains": [ 00:23:50.757 { 00:23:50.757 "dma_device_id": "system", 00:23:50.757 "dma_device_type": 1 00:23:50.757 }, 00:23:50.757 { 00:23:50.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:50.757 "dma_device_type": 2 00:23:50.757 } 00:23:50.757 ], 00:23:50.757 "driver_specific": {} 00:23:50.757 } 00:23:50.757 ] 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.757 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.017 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.017 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:51.017 "name": "Existed_Raid", 00:23:51.017 "uuid": "7d58ec49-86fe-4164-95f7-589ca07c5dc1", 00:23:51.017 "strip_size_kb": 64, 00:23:51.017 "state": "online", 00:23:51.017 "raid_level": "concat", 00:23:51.017 "superblock": false, 00:23:51.017 "num_base_bdevs": 2, 00:23:51.017 "num_base_bdevs_discovered": 2, 00:23:51.017 "num_base_bdevs_operational": 2, 00:23:51.017 "base_bdevs_list": [ 00:23:51.017 { 00:23:51.017 "name": "BaseBdev1", 00:23:51.017 "uuid": "792c9e3b-de96-4813-8c02-ccc645bf3953", 00:23:51.017 "is_configured": true, 00:23:51.017 "data_offset": 0, 00:23:51.017 "data_size": 65536 00:23:51.017 }, 00:23:51.017 { 00:23:51.017 "name": "BaseBdev2", 00:23:51.017 "uuid": "b1f12c83-3dea-41a6-930d-a332fc3dfa95", 00:23:51.017 "is_configured": true, 00:23:51.017 "data_offset": 0, 00:23:51.017 "data_size": 65536 00:23:51.017 } 00:23:51.017 ] 00:23:51.017 }' 00:23:51.017 17:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:23:51.017 17:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.277 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:51.277 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:51.277 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:51.277 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:51.277 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:51.277 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:51.277 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:51.277 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:51.277 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.277 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.277 [2024-11-26 17:20:21.306973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:51.277 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.277 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:51.277 "name": "Existed_Raid", 00:23:51.277 "aliases": [ 00:23:51.277 "7d58ec49-86fe-4164-95f7-589ca07c5dc1" 00:23:51.277 ], 00:23:51.277 "product_name": "Raid Volume", 00:23:51.277 "block_size": 512, 00:23:51.277 "num_blocks": 131072, 00:23:51.277 "uuid": "7d58ec49-86fe-4164-95f7-589ca07c5dc1", 00:23:51.277 "assigned_rate_limits": { 00:23:51.277 "rw_ios_per_sec": 0, 00:23:51.277 "rw_mbytes_per_sec": 0, 00:23:51.277 "r_mbytes_per_sec": 
0, 00:23:51.277 "w_mbytes_per_sec": 0 00:23:51.277 }, 00:23:51.277 "claimed": false, 00:23:51.277 "zoned": false, 00:23:51.277 "supported_io_types": { 00:23:51.277 "read": true, 00:23:51.277 "write": true, 00:23:51.277 "unmap": true, 00:23:51.277 "flush": true, 00:23:51.277 "reset": true, 00:23:51.277 "nvme_admin": false, 00:23:51.277 "nvme_io": false, 00:23:51.277 "nvme_io_md": false, 00:23:51.277 "write_zeroes": true, 00:23:51.277 "zcopy": false, 00:23:51.277 "get_zone_info": false, 00:23:51.277 "zone_management": false, 00:23:51.277 "zone_append": false, 00:23:51.277 "compare": false, 00:23:51.277 "compare_and_write": false, 00:23:51.277 "abort": false, 00:23:51.277 "seek_hole": false, 00:23:51.277 "seek_data": false, 00:23:51.277 "copy": false, 00:23:51.277 "nvme_iov_md": false 00:23:51.277 }, 00:23:51.277 "memory_domains": [ 00:23:51.277 { 00:23:51.277 "dma_device_id": "system", 00:23:51.277 "dma_device_type": 1 00:23:51.277 }, 00:23:51.277 { 00:23:51.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.277 "dma_device_type": 2 00:23:51.277 }, 00:23:51.277 { 00:23:51.277 "dma_device_id": "system", 00:23:51.277 "dma_device_type": 1 00:23:51.277 }, 00:23:51.277 { 00:23:51.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.277 "dma_device_type": 2 00:23:51.277 } 00:23:51.277 ], 00:23:51.277 "driver_specific": { 00:23:51.277 "raid": { 00:23:51.277 "uuid": "7d58ec49-86fe-4164-95f7-589ca07c5dc1", 00:23:51.277 "strip_size_kb": 64, 00:23:51.277 "state": "online", 00:23:51.277 "raid_level": "concat", 00:23:51.277 "superblock": false, 00:23:51.277 "num_base_bdevs": 2, 00:23:51.277 "num_base_bdevs_discovered": 2, 00:23:51.277 "num_base_bdevs_operational": 2, 00:23:51.277 "base_bdevs_list": [ 00:23:51.277 { 00:23:51.277 "name": "BaseBdev1", 00:23:51.277 "uuid": "792c9e3b-de96-4813-8c02-ccc645bf3953", 00:23:51.277 "is_configured": true, 00:23:51.278 "data_offset": 0, 00:23:51.278 "data_size": 65536 00:23:51.278 }, 00:23:51.278 { 00:23:51.278 "name": "BaseBdev2", 
00:23:51.278 "uuid": "b1f12c83-3dea-41a6-930d-a332fc3dfa95", 00:23:51.278 "is_configured": true, 00:23:51.278 "data_offset": 0, 00:23:51.278 "data_size": 65536 00:23:51.278 } 00:23:51.278 ] 00:23:51.278 } 00:23:51.278 } 00:23:51.278 }' 00:23:51.278 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:51.278 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:51.278 BaseBdev2' 00:23:51.278 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:51.537 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:51.537 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:51.537 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.538 [2024-11-26 17:20:21.530476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:51.538 [2024-11-26 17:20:21.530536] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:51.538 [2024-11-26 17:20:21.530620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.538 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.810 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.810 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:51.810 "name": "Existed_Raid", 00:23:51.810 "uuid": "7d58ec49-86fe-4164-95f7-589ca07c5dc1", 00:23:51.810 "strip_size_kb": 64, 00:23:51.810 
"state": "offline", 00:23:51.810 "raid_level": "concat", 00:23:51.810 "superblock": false, 00:23:51.810 "num_base_bdevs": 2, 00:23:51.810 "num_base_bdevs_discovered": 1, 00:23:51.810 "num_base_bdevs_operational": 1, 00:23:51.810 "base_bdevs_list": [ 00:23:51.810 { 00:23:51.810 "name": null, 00:23:51.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.810 "is_configured": false, 00:23:51.810 "data_offset": 0, 00:23:51.810 "data_size": 65536 00:23:51.810 }, 00:23:51.810 { 00:23:51.810 "name": "BaseBdev2", 00:23:51.810 "uuid": "b1f12c83-3dea-41a6-930d-a332fc3dfa95", 00:23:51.810 "is_configured": true, 00:23:51.810 "data_offset": 0, 00:23:51.810 "data_size": 65536 00:23:51.810 } 00:23:51.810 ] 00:23:51.810 }' 00:23:51.811 17:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:51.811 17:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.069 17:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:52.069 17:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:52.069 17:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.069 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.069 17:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:52.069 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.069 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.069 17:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:52.069 17:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:52.069 17:20:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:52.069 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.069 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.069 [2024-11-26 17:20:22.094659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:52.069 [2024-11-26 17:20:22.094730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61811 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61811 ']' 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61811 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61811 00:23:52.329 killing process with pid 61811 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61811' 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61811 00:23:52.329 [2024-11-26 17:20:22.284685] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:52.329 17:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61811 00:23:52.329 [2024-11-26 17:20:22.302309] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:23:53.810 00:23:53.810 real 0m5.069s 00:23:53.810 user 0m7.166s 00:23:53.810 sys 0m0.982s 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.810 ************************************ 00:23:53.810 END TEST raid_state_function_test 00:23:53.810 ************************************ 00:23:53.810 17:20:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:23:53.810 17:20:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:23:53.810 17:20:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.810 17:20:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:53.810 ************************************ 00:23:53.810 START TEST raid_state_function_test_sb 00:23:53.810 ************************************ 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62059 00:23:53.810 Process raid pid: 62059 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62059' 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62059 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62059 ']' 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:53.810 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.810 17:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:53.810 [2024-11-26 17:20:23.674818] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:23:53.810 [2024-11-26 17:20:23.674962] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.810 [2024-11-26 17:20:23.859783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.068 [2024-11-26 17:20:23.998032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.326 [2024-11-26 17:20:24.201507] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:54.326 [2024-11-26 17:20:24.201574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:54.583 [2024-11-26 17:20:24.514291] bdev.c:8482:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:23:54.583 [2024-11-26 17:20:24.514355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:54.583 [2024-11-26 17:20:24.514368] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:54.583 [2024-11-26 17:20:24.514382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:54.583 "name": "Existed_Raid", 00:23:54.583 "uuid": "ddea14b3-b75a-476c-ba5c-457129a10a4f", 00:23:54.583 "strip_size_kb": 64, 00:23:54.583 "state": "configuring", 00:23:54.583 "raid_level": "concat", 00:23:54.583 "superblock": true, 00:23:54.583 "num_base_bdevs": 2, 00:23:54.583 "num_base_bdevs_discovered": 0, 00:23:54.583 "num_base_bdevs_operational": 2, 00:23:54.583 "base_bdevs_list": [ 00:23:54.583 { 00:23:54.583 "name": "BaseBdev1", 00:23:54.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.583 "is_configured": false, 00:23:54.583 "data_offset": 0, 00:23:54.583 "data_size": 0 00:23:54.583 }, 00:23:54.583 { 00:23:54.583 "name": "BaseBdev2", 00:23:54.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.583 "is_configured": false, 00:23:54.583 "data_offset": 0, 00:23:54.583 "data_size": 0 00:23:54.583 } 00:23:54.583 ] 00:23:54.583 }' 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:54.583 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:54.841 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:54.841 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.841 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:54.841 [2024-11-26 17:20:24.937713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:23:54.841 [2024-11-26 17:20:24.937761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:54.841 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.841 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:54.841 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.841 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:54.841 [2024-11-26 17:20:24.945701] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:54.841 [2024-11-26 17:20:24.945759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:54.841 [2024-11-26 17:20:24.945771] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:54.841 [2024-11-26 17:20:24.945788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:54.841 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.841 17:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:54.841 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.841 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.101 [2024-11-26 17:20:24.993994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:55.101 BaseBdev1 00:23:55.101 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.101 17:20:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:55.101 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:55.101 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:55.101 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:55.101 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:55.101 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:55.101 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:55.101 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.101 17:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.101 [ 00:23:55.101 { 00:23:55.101 "name": "BaseBdev1", 00:23:55.101 "aliases": [ 00:23:55.101 "5c4691ee-0172-4c70-b6ad-61e8803ddfb9" 00:23:55.101 ], 00:23:55.101 "product_name": "Malloc disk", 00:23:55.101 "block_size": 512, 00:23:55.101 "num_blocks": 65536, 00:23:55.101 "uuid": "5c4691ee-0172-4c70-b6ad-61e8803ddfb9", 00:23:55.101 "assigned_rate_limits": { 00:23:55.101 "rw_ios_per_sec": 0, 00:23:55.101 "rw_mbytes_per_sec": 0, 00:23:55.101 "r_mbytes_per_sec": 0, 00:23:55.101 "w_mbytes_per_sec": 0 00:23:55.101 }, 00:23:55.101 "claimed": true, 
00:23:55.101 "claim_type": "exclusive_write", 00:23:55.101 "zoned": false, 00:23:55.101 "supported_io_types": { 00:23:55.101 "read": true, 00:23:55.101 "write": true, 00:23:55.101 "unmap": true, 00:23:55.101 "flush": true, 00:23:55.101 "reset": true, 00:23:55.101 "nvme_admin": false, 00:23:55.101 "nvme_io": false, 00:23:55.101 "nvme_io_md": false, 00:23:55.101 "write_zeroes": true, 00:23:55.101 "zcopy": true, 00:23:55.101 "get_zone_info": false, 00:23:55.101 "zone_management": false, 00:23:55.101 "zone_append": false, 00:23:55.101 "compare": false, 00:23:55.101 "compare_and_write": false, 00:23:55.101 "abort": true, 00:23:55.101 "seek_hole": false, 00:23:55.101 "seek_data": false, 00:23:55.101 "copy": true, 00:23:55.101 "nvme_iov_md": false 00:23:55.101 }, 00:23:55.101 "memory_domains": [ 00:23:55.101 { 00:23:55.101 "dma_device_id": "system", 00:23:55.101 "dma_device_type": 1 00:23:55.101 }, 00:23:55.101 { 00:23:55.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:55.101 "dma_device_type": 2 00:23:55.101 } 00:23:55.101 ], 00:23:55.101 "driver_specific": {} 00:23:55.101 } 00:23:55.101 ] 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:55.101 17:20:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:55.101 "name": "Existed_Raid", 00:23:55.101 "uuid": "ea361e22-3cd2-4a2b-a3da-8ba18d54d024", 00:23:55.101 "strip_size_kb": 64, 00:23:55.101 "state": "configuring", 00:23:55.101 "raid_level": "concat", 00:23:55.101 "superblock": true, 00:23:55.101 "num_base_bdevs": 2, 00:23:55.101 "num_base_bdevs_discovered": 1, 00:23:55.101 "num_base_bdevs_operational": 2, 00:23:55.101 "base_bdevs_list": [ 00:23:55.101 { 00:23:55.101 "name": "BaseBdev1", 00:23:55.101 "uuid": "5c4691ee-0172-4c70-b6ad-61e8803ddfb9", 00:23:55.101 "is_configured": true, 00:23:55.101 "data_offset": 2048, 00:23:55.101 "data_size": 63488 00:23:55.101 }, 00:23:55.101 { 00:23:55.101 "name": "BaseBdev2", 00:23:55.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.101 
"is_configured": false, 00:23:55.101 "data_offset": 0, 00:23:55.101 "data_size": 0 00:23:55.101 } 00:23:55.101 ] 00:23:55.101 }' 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:55.101 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.361 [2024-11-26 17:20:25.445698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:55.361 [2024-11-26 17:20:25.445778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.361 [2024-11-26 17:20:25.457792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:55.361 [2024-11-26 17:20:25.460110] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:55.361 [2024-11-26 17:20:25.460173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.361 17:20:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.361 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.621 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.621 17:20:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:55.621 "name": "Existed_Raid", 00:23:55.621 "uuid": "60a12440-3c9c-4a59-8bc6-43b1dd226d08", 00:23:55.621 "strip_size_kb": 64, 00:23:55.621 "state": "configuring", 00:23:55.621 "raid_level": "concat", 00:23:55.621 "superblock": true, 00:23:55.621 "num_base_bdevs": 2, 00:23:55.621 "num_base_bdevs_discovered": 1, 00:23:55.621 "num_base_bdevs_operational": 2, 00:23:55.621 "base_bdevs_list": [ 00:23:55.621 { 00:23:55.621 "name": "BaseBdev1", 00:23:55.621 "uuid": "5c4691ee-0172-4c70-b6ad-61e8803ddfb9", 00:23:55.621 "is_configured": true, 00:23:55.621 "data_offset": 2048, 00:23:55.621 "data_size": 63488 00:23:55.621 }, 00:23:55.621 { 00:23:55.621 "name": "BaseBdev2", 00:23:55.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.621 "is_configured": false, 00:23:55.621 "data_offset": 0, 00:23:55.621 "data_size": 0 00:23:55.621 } 00:23:55.621 ] 00:23:55.621 }' 00:23:55.621 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:55.621 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.880 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:55.880 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.880 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.880 [2024-11-26 17:20:25.923130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:55.880 [2024-11-26 17:20:25.923433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:55.880 [2024-11-26 17:20:25.923452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:55.880 [2024-11-26 17:20:25.923766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:23:55.880 [2024-11-26 17:20:25.923940] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:55.880 [2024-11-26 17:20:25.923957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:55.880 BaseBdev2 00:23:55.880 [2024-11-26 17:20:25.924108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.880 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.881 17:20:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.881 [ 00:23:55.881 { 00:23:55.881 "name": "BaseBdev2", 00:23:55.881 "aliases": [ 00:23:55.881 "85e5c889-6188-45c1-aa77-c54ec3841408" 00:23:55.881 ], 00:23:55.881 "product_name": "Malloc disk", 00:23:55.881 "block_size": 512, 00:23:55.881 "num_blocks": 65536, 00:23:55.881 "uuid": "85e5c889-6188-45c1-aa77-c54ec3841408", 00:23:55.881 "assigned_rate_limits": { 00:23:55.881 "rw_ios_per_sec": 0, 00:23:55.881 "rw_mbytes_per_sec": 0, 00:23:55.881 "r_mbytes_per_sec": 0, 00:23:55.881 "w_mbytes_per_sec": 0 00:23:55.881 }, 00:23:55.881 "claimed": true, 00:23:55.881 "claim_type": "exclusive_write", 00:23:55.881 "zoned": false, 00:23:55.881 "supported_io_types": { 00:23:55.881 "read": true, 00:23:55.881 "write": true, 00:23:55.881 "unmap": true, 00:23:55.881 "flush": true, 00:23:55.881 "reset": true, 00:23:55.881 "nvme_admin": false, 00:23:55.881 "nvme_io": false, 00:23:55.881 "nvme_io_md": false, 00:23:55.881 "write_zeroes": true, 00:23:55.881 "zcopy": true, 00:23:55.881 "get_zone_info": false, 00:23:55.881 "zone_management": false, 00:23:55.881 "zone_append": false, 00:23:55.881 "compare": false, 00:23:55.881 "compare_and_write": false, 00:23:55.881 "abort": true, 00:23:55.881 "seek_hole": false, 00:23:55.881 "seek_data": false, 00:23:55.881 "copy": true, 00:23:55.881 "nvme_iov_md": false 00:23:55.881 }, 00:23:55.881 "memory_domains": [ 00:23:55.881 { 00:23:55.881 "dma_device_id": "system", 00:23:55.881 "dma_device_type": 1 00:23:55.881 }, 00:23:55.881 { 00:23:55.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:55.881 "dma_device_type": 2 00:23:55.881 } 00:23:55.881 ], 00:23:55.881 "driver_specific": {} 00:23:55.881 } 00:23:55.881 ] 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:55.881 17:20:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.881 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.140 17:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.140 17:20:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:56.140 "name": "Existed_Raid", 00:23:56.140 "uuid": "60a12440-3c9c-4a59-8bc6-43b1dd226d08", 00:23:56.140 "strip_size_kb": 64, 00:23:56.140 "state": "online", 00:23:56.140 "raid_level": "concat", 00:23:56.140 "superblock": true, 00:23:56.140 "num_base_bdevs": 2, 00:23:56.140 "num_base_bdevs_discovered": 2, 00:23:56.140 "num_base_bdevs_operational": 2, 00:23:56.140 "base_bdevs_list": [ 00:23:56.140 { 00:23:56.140 "name": "BaseBdev1", 00:23:56.140 "uuid": "5c4691ee-0172-4c70-b6ad-61e8803ddfb9", 00:23:56.140 "is_configured": true, 00:23:56.140 "data_offset": 2048, 00:23:56.140 "data_size": 63488 00:23:56.140 }, 00:23:56.140 { 00:23:56.140 "name": "BaseBdev2", 00:23:56.140 "uuid": "85e5c889-6188-45c1-aa77-c54ec3841408", 00:23:56.140 "is_configured": true, 00:23:56.140 "data_offset": 2048, 00:23:56.140 "data_size": 63488 00:23:56.140 } 00:23:56.140 ] 00:23:56.140 }' 00:23:56.140 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:56.140 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.399 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:56.399 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:56.399 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:56.399 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:56.399 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:56.399 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:56.399 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:23:56.399 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:56.399 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.399 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.399 [2024-11-26 17:20:26.414871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:56.399 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.399 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:56.399 "name": "Existed_Raid", 00:23:56.399 "aliases": [ 00:23:56.399 "60a12440-3c9c-4a59-8bc6-43b1dd226d08" 00:23:56.399 ], 00:23:56.399 "product_name": "Raid Volume", 00:23:56.399 "block_size": 512, 00:23:56.399 "num_blocks": 126976, 00:23:56.399 "uuid": "60a12440-3c9c-4a59-8bc6-43b1dd226d08", 00:23:56.399 "assigned_rate_limits": { 00:23:56.399 "rw_ios_per_sec": 0, 00:23:56.399 "rw_mbytes_per_sec": 0, 00:23:56.399 "r_mbytes_per_sec": 0, 00:23:56.399 "w_mbytes_per_sec": 0 00:23:56.399 }, 00:23:56.399 "claimed": false, 00:23:56.399 "zoned": false, 00:23:56.399 "supported_io_types": { 00:23:56.399 "read": true, 00:23:56.399 "write": true, 00:23:56.399 "unmap": true, 00:23:56.399 "flush": true, 00:23:56.399 "reset": true, 00:23:56.399 "nvme_admin": false, 00:23:56.399 "nvme_io": false, 00:23:56.399 "nvme_io_md": false, 00:23:56.399 "write_zeroes": true, 00:23:56.399 "zcopy": false, 00:23:56.399 "get_zone_info": false, 00:23:56.399 "zone_management": false, 00:23:56.399 "zone_append": false, 00:23:56.400 "compare": false, 00:23:56.400 "compare_and_write": false, 00:23:56.400 "abort": false, 00:23:56.400 "seek_hole": false, 00:23:56.400 "seek_data": false, 00:23:56.400 "copy": false, 00:23:56.400 "nvme_iov_md": false 00:23:56.400 }, 00:23:56.400 "memory_domains": [ 00:23:56.400 { 00:23:56.400 
"dma_device_id": "system", 00:23:56.400 "dma_device_type": 1 00:23:56.400 }, 00:23:56.400 { 00:23:56.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.400 "dma_device_type": 2 00:23:56.400 }, 00:23:56.400 { 00:23:56.400 "dma_device_id": "system", 00:23:56.400 "dma_device_type": 1 00:23:56.400 }, 00:23:56.400 { 00:23:56.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.400 "dma_device_type": 2 00:23:56.400 } 00:23:56.400 ], 00:23:56.400 "driver_specific": { 00:23:56.400 "raid": { 00:23:56.400 "uuid": "60a12440-3c9c-4a59-8bc6-43b1dd226d08", 00:23:56.400 "strip_size_kb": 64, 00:23:56.400 "state": "online", 00:23:56.400 "raid_level": "concat", 00:23:56.400 "superblock": true, 00:23:56.400 "num_base_bdevs": 2, 00:23:56.400 "num_base_bdevs_discovered": 2, 00:23:56.400 "num_base_bdevs_operational": 2, 00:23:56.400 "base_bdevs_list": [ 00:23:56.400 { 00:23:56.400 "name": "BaseBdev1", 00:23:56.400 "uuid": "5c4691ee-0172-4c70-b6ad-61e8803ddfb9", 00:23:56.400 "is_configured": true, 00:23:56.400 "data_offset": 2048, 00:23:56.400 "data_size": 63488 00:23:56.400 }, 00:23:56.400 { 00:23:56.400 "name": "BaseBdev2", 00:23:56.400 "uuid": "85e5c889-6188-45c1-aa77-c54ec3841408", 00:23:56.400 "is_configured": true, 00:23:56.400 "data_offset": 2048, 00:23:56.400 "data_size": 63488 00:23:56.400 } 00:23:56.400 ] 00:23:56.400 } 00:23:56.400 } 00:23:56.400 }' 00:23:56.400 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:56.400 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:56.400 BaseBdev2' 00:23:56.400 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:56.687 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:56.687 17:20:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:56.687 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:56.687 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.687 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.687 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:56.687 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.687 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:56.687 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:56.687 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:56.687 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:56.687 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.687 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:56.687 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.687 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.688 [2024-11-26 17:20:26.634328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:56.688 [2024-11-26 17:20:26.634376] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:56.688 [2024-11-26 17:20:26.634440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:56.688 "name": "Existed_Raid", 00:23:56.688 "uuid": "60a12440-3c9c-4a59-8bc6-43b1dd226d08", 00:23:56.688 "strip_size_kb": 64, 00:23:56.688 "state": "offline", 00:23:56.688 "raid_level": "concat", 00:23:56.688 "superblock": true, 00:23:56.688 "num_base_bdevs": 2, 00:23:56.688 "num_base_bdevs_discovered": 1, 00:23:56.688 "num_base_bdevs_operational": 1, 00:23:56.688 "base_bdevs_list": [ 00:23:56.688 { 00:23:56.688 "name": null, 00:23:56.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:56.688 "is_configured": false, 00:23:56.688 "data_offset": 0, 00:23:56.688 "data_size": 63488 00:23:56.688 }, 00:23:56.688 { 00:23:56.688 "name": "BaseBdev2", 00:23:56.688 "uuid": "85e5c889-6188-45c1-aa77-c54ec3841408", 00:23:56.688 "is_configured": true, 00:23:56.688 "data_offset": 2048, 00:23:56.688 "data_size": 63488 00:23:56.688 } 00:23:56.688 ] 
00:23:56.688 }' 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:56.688 17:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.282 [2024-11-26 17:20:27.208275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:57.282 [2024-11-26 17:20:27.208513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.282 17:20:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62059 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62059 ']' 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62059 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:57.282 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62059 00:23:57.540 killing process with pid 62059 00:23:57.540 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:57.540 17:20:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:57.540 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62059' 00:23:57.540 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62059 00:23:57.540 [2024-11-26 17:20:27.415356] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:57.540 17:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62059 00:23:57.540 [2024-11-26 17:20:27.433139] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:58.918 17:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:23:58.918 00:23:58.918 real 0m5.047s 00:23:58.918 user 0m7.147s 00:23:58.918 sys 0m0.972s 00:23:58.918 17:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:58.918 ************************************ 00:23:58.918 END TEST raid_state_function_test_sb 00:23:58.918 ************************************ 00:23:58.918 17:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:58.918 17:20:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:23:58.918 17:20:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:58.918 17:20:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.918 17:20:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:58.918 ************************************ 00:23:58.918 START TEST raid_superblock_test 00:23:58.918 ************************************ 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:23:58.918 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62311 00:23:58.919 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:58.919 17:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62311 00:23:58.919 17:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62311 ']' 00:23:58.919 
17:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.919 17:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.919 17:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.919 17:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.919 17:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.919 [2024-11-26 17:20:28.784502] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:23:58.919 [2024-11-26 17:20:28.784664] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62311 ] 00:23:58.919 [2024-11-26 17:20:28.969711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.177 [2024-11-26 17:20:29.115504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.436 [2024-11-26 17:20:29.338879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:59.436 [2024-11-26 17:20:29.338949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.695 malloc1 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.695 [2024-11-26 17:20:29.682328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:59.695 [2024-11-26 17:20:29.682398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:59.695 [2024-11-26 17:20:29.682426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:59.695 [2024-11-26 17:20:29.682438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:23:59.695 [2024-11-26 17:20:29.685008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:59.695 [2024-11-26 17:20:29.685047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:59.695 pt1 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.695 malloc2 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.695 [2024-11-26 17:20:29.739989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:59.695 [2024-11-26 17:20:29.740057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:59.695 [2024-11-26 17:20:29.740091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:59.695 [2024-11-26 17:20:29.740103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:59.695 [2024-11-26 17:20:29.742718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:59.695 [2024-11-26 17:20:29.742758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:59.695 pt2 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:23:59.695 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.696 [2024-11-26 17:20:29.752041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:59.696 [2024-11-26 17:20:29.754257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:59.696 [2024-11-26 17:20:29.754432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:59.696 [2024-11-26 17:20:29.754446] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:23:59.696 [2024-11-26 17:20:29.754744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:59.696 [2024-11-26 17:20:29.754901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:59.696 [2024-11-26 17:20:29.754915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:59.696 [2024-11-26 17:20:29.755081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.696 17:20:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.696 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.955 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:59.955 "name": "raid_bdev1", 00:23:59.955 "uuid": "7f0d9714-7867-4408-9edc-5dd3668398fe", 00:23:59.955 "strip_size_kb": 64, 00:23:59.955 "state": "online", 00:23:59.955 "raid_level": "concat", 00:23:59.955 "superblock": true, 00:23:59.955 "num_base_bdevs": 2, 00:23:59.955 "num_base_bdevs_discovered": 2, 00:23:59.955 "num_base_bdevs_operational": 2, 00:23:59.955 "base_bdevs_list": [ 00:23:59.955 { 00:23:59.955 "name": "pt1", 00:23:59.955 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:59.955 "is_configured": true, 00:23:59.955 "data_offset": 2048, 00:23:59.955 "data_size": 63488 00:23:59.955 }, 00:23:59.955 { 00:23:59.955 "name": "pt2", 00:23:59.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:59.955 "is_configured": true, 00:23:59.955 "data_offset": 2048, 00:23:59.955 "data_size": 63488 00:23:59.955 } 00:23:59.955 ] 00:23:59.955 }' 00:23:59.955 17:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:59.955 17:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.214 [2024-11-26 17:20:30.199720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:00.214 "name": "raid_bdev1", 00:24:00.214 "aliases": [ 00:24:00.214 "7f0d9714-7867-4408-9edc-5dd3668398fe" 00:24:00.214 ], 00:24:00.214 "product_name": "Raid Volume", 00:24:00.214 "block_size": 512, 00:24:00.214 "num_blocks": 126976, 00:24:00.214 "uuid": "7f0d9714-7867-4408-9edc-5dd3668398fe", 00:24:00.214 "assigned_rate_limits": { 00:24:00.214 "rw_ios_per_sec": 0, 00:24:00.214 "rw_mbytes_per_sec": 0, 00:24:00.214 "r_mbytes_per_sec": 0, 00:24:00.214 "w_mbytes_per_sec": 0 00:24:00.214 }, 00:24:00.214 "claimed": false, 00:24:00.214 "zoned": false, 00:24:00.214 "supported_io_types": { 00:24:00.214 "read": true, 00:24:00.214 "write": true, 00:24:00.214 "unmap": true, 00:24:00.214 "flush": true, 00:24:00.214 "reset": true, 00:24:00.214 "nvme_admin": false, 00:24:00.214 "nvme_io": false, 00:24:00.214 "nvme_io_md": false, 00:24:00.214 "write_zeroes": true, 00:24:00.214 "zcopy": false, 00:24:00.214 "get_zone_info": false, 00:24:00.214 "zone_management": false, 00:24:00.214 "zone_append": false, 00:24:00.214 "compare": false, 00:24:00.214 "compare_and_write": false, 00:24:00.214 "abort": false, 00:24:00.214 
"seek_hole": false, 00:24:00.214 "seek_data": false, 00:24:00.214 "copy": false, 00:24:00.214 "nvme_iov_md": false 00:24:00.214 }, 00:24:00.214 "memory_domains": [ 00:24:00.214 { 00:24:00.214 "dma_device_id": "system", 00:24:00.214 "dma_device_type": 1 00:24:00.214 }, 00:24:00.214 { 00:24:00.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.214 "dma_device_type": 2 00:24:00.214 }, 00:24:00.214 { 00:24:00.214 "dma_device_id": "system", 00:24:00.214 "dma_device_type": 1 00:24:00.214 }, 00:24:00.214 { 00:24:00.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.214 "dma_device_type": 2 00:24:00.214 } 00:24:00.214 ], 00:24:00.214 "driver_specific": { 00:24:00.214 "raid": { 00:24:00.214 "uuid": "7f0d9714-7867-4408-9edc-5dd3668398fe", 00:24:00.214 "strip_size_kb": 64, 00:24:00.214 "state": "online", 00:24:00.214 "raid_level": "concat", 00:24:00.214 "superblock": true, 00:24:00.214 "num_base_bdevs": 2, 00:24:00.214 "num_base_bdevs_discovered": 2, 00:24:00.214 "num_base_bdevs_operational": 2, 00:24:00.214 "base_bdevs_list": [ 00:24:00.214 { 00:24:00.214 "name": "pt1", 00:24:00.214 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:00.214 "is_configured": true, 00:24:00.214 "data_offset": 2048, 00:24:00.214 "data_size": 63488 00:24:00.214 }, 00:24:00.214 { 00:24:00.214 "name": "pt2", 00:24:00.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:00.214 "is_configured": true, 00:24:00.214 "data_offset": 2048, 00:24:00.214 "data_size": 63488 00:24:00.214 } 00:24:00.214 ] 00:24:00.214 } 00:24:00.214 } 00:24:00.214 }' 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:00.214 pt2' 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:00.214 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.474 [2024-11-26 17:20:30.391339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7f0d9714-7867-4408-9edc-5dd3668398fe 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7f0d9714-7867-4408-9edc-5dd3668398fe ']' 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.474 [2024-11-26 17:20:30.430996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:00.474 [2024-11-26 17:20:30.431030] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:00.474 [2024-11-26 17:20:30.431128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:00.474 [2024-11-26 17:20:30.431184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:00.474 [2024-11-26 17:20:30.431200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:00.474 17:20:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.474 [2024-11-26 17:20:30.550879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:00.474 [2024-11-26 17:20:30.553294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:00.474 [2024-11-26 17:20:30.553372] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:00.474 [2024-11-26 17:20:30.553443] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:00.474 [2024-11-26 17:20:30.553463] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:00.474 [2024-11-26 17:20:30.553476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:24:00.474 request: 00:24:00.474 { 00:24:00.474 "name": "raid_bdev1", 00:24:00.474 "raid_level": "concat", 00:24:00.474 "base_bdevs": [ 00:24:00.474 "malloc1", 00:24:00.474 "malloc2" 00:24:00.474 ], 00:24:00.474 "strip_size_kb": 64, 00:24:00.474 "superblock": false, 00:24:00.474 "method": "bdev_raid_create", 00:24:00.474 "req_id": 1 00:24:00.474 } 00:24:00.474 Got JSON-RPC error response 00:24:00.474 response: 00:24:00.474 { 00:24:00.474 "code": -17, 00:24:00.474 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:00.474 } 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:24:00.474 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.733 [2024-11-26 17:20:30.614779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:00.733 [2024-11-26 17:20:30.614859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:00.733 [2024-11-26 17:20:30.614881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:00.733 [2024-11-26 17:20:30.614896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.733 [2024-11-26 17:20:30.617628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.733 [2024-11-26 17:20:30.617671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:00.733 [2024-11-26 17:20:30.617770] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:00.733 [2024-11-26 17:20:30.617847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:00.733 pt1 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:00.733 "name": "raid_bdev1", 00:24:00.733 "uuid": "7f0d9714-7867-4408-9edc-5dd3668398fe", 00:24:00.733 "strip_size_kb": 64, 00:24:00.733 "state": "configuring", 00:24:00.733 "raid_level": "concat", 00:24:00.733 "superblock": true, 00:24:00.733 "num_base_bdevs": 2, 00:24:00.733 "num_base_bdevs_discovered": 1, 00:24:00.733 "num_base_bdevs_operational": 2, 00:24:00.733 "base_bdevs_list": [ 00:24:00.733 { 00:24:00.733 
"name": "pt1", 00:24:00.733 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:00.733 "is_configured": true, 00:24:00.733 "data_offset": 2048, 00:24:00.733 "data_size": 63488 00:24:00.733 }, 00:24:00.733 { 00:24:00.733 "name": null, 00:24:00.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:00.733 "is_configured": false, 00:24:00.733 "data_offset": 2048, 00:24:00.733 "data_size": 63488 00:24:00.733 } 00:24:00.733 ] 00:24:00.733 }' 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:00.733 17:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.992 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:00.992 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:00.992 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:00.992 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:00.992 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.993 [2024-11-26 17:20:31.026347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:00.993 [2024-11-26 17:20:31.026439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:00.993 [2024-11-26 17:20:31.026464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:00.993 [2024-11-26 17:20:31.026480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.993 [2024-11-26 17:20:31.027077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.993 [2024-11-26 17:20:31.027118] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:00.993 [2024-11-26 17:20:31.027216] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:00.993 [2024-11-26 17:20:31.027251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:00.993 [2024-11-26 17:20:31.027378] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:00.993 [2024-11-26 17:20:31.027400] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:00.993 [2024-11-26 17:20:31.027695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:00.993 [2024-11-26 17:20:31.027844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:00.993 [2024-11-26 17:20:31.027861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:00.993 [2024-11-26 17:20:31.028007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:00.993 pt2 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:00.993 
17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:00.993 "name": "raid_bdev1", 00:24:00.993 "uuid": "7f0d9714-7867-4408-9edc-5dd3668398fe", 00:24:00.993 "strip_size_kb": 64, 00:24:00.993 "state": "online", 00:24:00.993 "raid_level": "concat", 00:24:00.993 "superblock": true, 00:24:00.993 "num_base_bdevs": 2, 00:24:00.993 "num_base_bdevs_discovered": 2, 00:24:00.993 "num_base_bdevs_operational": 2, 00:24:00.993 "base_bdevs_list": [ 00:24:00.993 { 00:24:00.993 "name": "pt1", 00:24:00.993 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:00.993 "is_configured": true, 00:24:00.993 "data_offset": 2048, 00:24:00.993 "data_size": 63488 00:24:00.993 }, 00:24:00.993 { 00:24:00.993 "name": "pt2", 00:24:00.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:00.993 "is_configured": true, 00:24:00.993 "data_offset": 2048, 00:24:00.993 "data_size": 63488 
00:24:00.993 } 00:24:00.993 ] 00:24:00.993 }' 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:00.993 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.561 [2024-11-26 17:20:31.418044] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:01.561 "name": "raid_bdev1", 00:24:01.561 "aliases": [ 00:24:01.561 "7f0d9714-7867-4408-9edc-5dd3668398fe" 00:24:01.561 ], 00:24:01.561 "product_name": "Raid Volume", 00:24:01.561 "block_size": 512, 00:24:01.561 "num_blocks": 126976, 00:24:01.561 "uuid": "7f0d9714-7867-4408-9edc-5dd3668398fe", 00:24:01.561 "assigned_rate_limits": { 00:24:01.561 
"rw_ios_per_sec": 0, 00:24:01.561 "rw_mbytes_per_sec": 0, 00:24:01.561 "r_mbytes_per_sec": 0, 00:24:01.561 "w_mbytes_per_sec": 0 00:24:01.561 }, 00:24:01.561 "claimed": false, 00:24:01.561 "zoned": false, 00:24:01.561 "supported_io_types": { 00:24:01.561 "read": true, 00:24:01.561 "write": true, 00:24:01.561 "unmap": true, 00:24:01.561 "flush": true, 00:24:01.561 "reset": true, 00:24:01.561 "nvme_admin": false, 00:24:01.561 "nvme_io": false, 00:24:01.561 "nvme_io_md": false, 00:24:01.561 "write_zeroes": true, 00:24:01.561 "zcopy": false, 00:24:01.561 "get_zone_info": false, 00:24:01.561 "zone_management": false, 00:24:01.561 "zone_append": false, 00:24:01.561 "compare": false, 00:24:01.561 "compare_and_write": false, 00:24:01.561 "abort": false, 00:24:01.561 "seek_hole": false, 00:24:01.561 "seek_data": false, 00:24:01.561 "copy": false, 00:24:01.561 "nvme_iov_md": false 00:24:01.561 }, 00:24:01.561 "memory_domains": [ 00:24:01.561 { 00:24:01.561 "dma_device_id": "system", 00:24:01.561 "dma_device_type": 1 00:24:01.561 }, 00:24:01.561 { 00:24:01.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.561 "dma_device_type": 2 00:24:01.561 }, 00:24:01.561 { 00:24:01.561 "dma_device_id": "system", 00:24:01.561 "dma_device_type": 1 00:24:01.561 }, 00:24:01.561 { 00:24:01.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.561 "dma_device_type": 2 00:24:01.561 } 00:24:01.561 ], 00:24:01.561 "driver_specific": { 00:24:01.561 "raid": { 00:24:01.561 "uuid": "7f0d9714-7867-4408-9edc-5dd3668398fe", 00:24:01.561 "strip_size_kb": 64, 00:24:01.561 "state": "online", 00:24:01.561 "raid_level": "concat", 00:24:01.561 "superblock": true, 00:24:01.561 "num_base_bdevs": 2, 00:24:01.561 "num_base_bdevs_discovered": 2, 00:24:01.561 "num_base_bdevs_operational": 2, 00:24:01.561 "base_bdevs_list": [ 00:24:01.561 { 00:24:01.561 "name": "pt1", 00:24:01.561 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:01.561 "is_configured": true, 00:24:01.561 "data_offset": 2048, 00:24:01.561 
"data_size": 63488 00:24:01.561 }, 00:24:01.561 { 00:24:01.561 "name": "pt2", 00:24:01.561 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:01.561 "is_configured": true, 00:24:01.561 "data_offset": 2048, 00:24:01.561 "data_size": 63488 00:24:01.561 } 00:24:01.561 ] 00:24:01.561 } 00:24:01.561 } 00:24:01.561 }' 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:01.561 pt2' 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:01.561 [2024-11-26 17:20:31.621830] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7f0d9714-7867-4408-9edc-5dd3668398fe '!=' 7f0d9714-7867-4408-9edc-5dd3668398fe ']' 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62311 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62311 ']' 
00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62311 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.561 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62311 00:24:01.820 killing process with pid 62311 00:24:01.820 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:01.820 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:01.820 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62311' 00:24:01.820 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62311 00:24:01.820 [2024-11-26 17:20:31.704619] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:01.820 [2024-11-26 17:20:31.704740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:01.820 17:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62311 00:24:01.820 [2024-11-26 17:20:31.704798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:01.820 [2024-11-26 17:20:31.704817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:01.820 [2024-11-26 17:20:31.925343] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:03.199 17:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:24:03.199 00:24:03.199 real 0m4.444s 00:24:03.199 user 0m6.071s 00:24:03.199 sys 0m0.883s 00:24:03.199 17:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:03.199 
************************************ 00:24:03.199 END TEST raid_superblock_test 00:24:03.199 17:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.199 ************************************ 00:24:03.199 17:20:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:24:03.199 17:20:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:03.199 17:20:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:03.199 17:20:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:03.199 ************************************ 00:24:03.199 START TEST raid_read_error_test 00:24:03.199 ************************************ 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pCMGGrdxsx 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62523 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62523 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62523 ']' 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:24:03.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.199 17:20:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.199 [2024-11-26 17:20:33.310372] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:24:03.199 [2024-11-26 17:20:33.310534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62523 ] 00:24:03.458 [2024-11-26 17:20:33.495835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.716 [2024-11-26 17:20:33.631354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.975 [2024-11-26 17:20:33.828924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:03.975 [2024-11-26 17:20:33.828995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:04.234 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.234 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:24:04.234 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:04.234 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:04.234 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.234 17:20:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:04.234 BaseBdev1_malloc 00:24:04.234 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.234 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:24:04.234 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.234 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.234 true 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.235 [2024-11-26 17:20:34.207706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:04.235 [2024-11-26 17:20:34.207776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.235 [2024-11-26 17:20:34.207803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:04.235 [2024-11-26 17:20:34.207819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.235 [2024-11-26 17:20:34.210371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.235 [2024-11-26 17:20:34.210417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:04.235 BaseBdev1 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:04.235 17:20:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.235 BaseBdev2_malloc 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.235 true 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.235 [2024-11-26 17:20:34.273715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:04.235 [2024-11-26 17:20:34.273779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.235 [2024-11-26 17:20:34.273799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:04.235 [2024-11-26 17:20:34.273814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.235 [2024-11-26 17:20:34.276312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.235 [2024-11-26 17:20:34.276356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:24:04.235 BaseBdev2 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.235 [2024-11-26 17:20:34.285762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:04.235 [2024-11-26 17:20:34.288123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:04.235 [2024-11-26 17:20:34.288445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:04.235 [2024-11-26 17:20:34.288573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:04.235 [2024-11-26 17:20:34.288885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:04.235 [2024-11-26 17:20:34.289101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:04.235 [2024-11-26 17:20:34.289146] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:04.235 [2024-11-26 17:20:34.289416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:04.235 "name": "raid_bdev1", 00:24:04.235 "uuid": "5408914d-f37e-47ec-a4d1-404db8094e26", 00:24:04.235 "strip_size_kb": 64, 00:24:04.235 "state": "online", 00:24:04.235 "raid_level": "concat", 00:24:04.235 "superblock": true, 00:24:04.235 "num_base_bdevs": 2, 00:24:04.235 "num_base_bdevs_discovered": 2, 00:24:04.235 "num_base_bdevs_operational": 2, 00:24:04.235 "base_bdevs_list": [ 00:24:04.235 { 00:24:04.235 "name": "BaseBdev1", 00:24:04.235 "uuid": "41de42f9-a733-55b0-b4a4-bc2550b38013", 00:24:04.235 "is_configured": true, 00:24:04.235 "data_offset": 2048, 00:24:04.235 "data_size": 63488 
00:24:04.235 }, 00:24:04.235 { 00:24:04.235 "name": "BaseBdev2", 00:24:04.235 "uuid": "0575d743-eb7b-516c-b5b2-617435df65b5", 00:24:04.235 "is_configured": true, 00:24:04.235 "data_offset": 2048, 00:24:04.235 "data_size": 63488 00:24:04.235 } 00:24:04.235 ] 00:24:04.235 }' 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:04.235 17:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.832 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:04.832 17:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:24:04.832 [2024-11-26 17:20:34.787171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:05.767 "name": "raid_bdev1", 00:24:05.767 "uuid": "5408914d-f37e-47ec-a4d1-404db8094e26", 00:24:05.767 "strip_size_kb": 64, 00:24:05.767 "state": "online", 00:24:05.767 "raid_level": "concat", 00:24:05.767 "superblock": true, 00:24:05.767 "num_base_bdevs": 2, 00:24:05.767 "num_base_bdevs_discovered": 2, 00:24:05.767 "num_base_bdevs_operational": 2, 00:24:05.767 "base_bdevs_list": [ 00:24:05.767 { 00:24:05.767 "name": "BaseBdev1", 00:24:05.767 "uuid": "41de42f9-a733-55b0-b4a4-bc2550b38013", 00:24:05.767 "is_configured": true, 00:24:05.767 "data_offset": 2048, 00:24:05.767 "data_size": 63488 
00:24:05.767 }, 00:24:05.767 { 00:24:05.767 "name": "BaseBdev2", 00:24:05.767 "uuid": "0575d743-eb7b-516c-b5b2-617435df65b5", 00:24:05.767 "is_configured": true, 00:24:05.767 "data_offset": 2048, 00:24:05.767 "data_size": 63488 00:24:05.767 } 00:24:05.767 ] 00:24:05.767 }' 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:05.767 17:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.027 17:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:06.027 17:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.027 17:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.027 [2024-11-26 17:20:36.133580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:06.027 [2024-11-26 17:20:36.133794] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:06.027 [2024-11-26 17:20:36.136814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:06.027 [2024-11-26 17:20:36.136977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:06.027 [2024-11-26 17:20:36.137030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:06.027 [2024-11-26 17:20:36.137049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:06.286 { 00:24:06.286 "results": [ 00:24:06.286 { 00:24:06.286 "job": "raid_bdev1", 00:24:06.286 "core_mask": "0x1", 00:24:06.286 "workload": "randrw", 00:24:06.286 "percentage": 50, 00:24:06.286 "status": "finished", 00:24:06.286 "queue_depth": 1, 00:24:06.286 "io_size": 131072, 00:24:06.286 "runtime": 1.346581, 00:24:06.286 "iops": 15845.314912359523, 00:24:06.286 "mibps": 1980.6643640449404, 00:24:06.286 
"io_failed": 1, 00:24:06.286 "io_timeout": 0, 00:24:06.286 "avg_latency_us": 87.54688285431537, 00:24:06.286 "min_latency_us": 26.730923694779115, 00:24:06.286 "max_latency_us": 1447.5823293172691 00:24:06.286 } 00:24:06.286 ], 00:24:06.286 "core_count": 1 00:24:06.286 } 00:24:06.286 17:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.286 17:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62523 00:24:06.286 17:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62523 ']' 00:24:06.286 17:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62523 00:24:06.286 17:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:24:06.286 17:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.286 17:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62523 00:24:06.286 killing process with pid 62523 00:24:06.286 17:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:06.286 17:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:06.286 17:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62523' 00:24:06.286 17:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62523 00:24:06.286 [2024-11-26 17:20:36.187941] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:06.286 17:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62523 00:24:06.286 [2024-11-26 17:20:36.335733] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:07.662 17:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pCMGGrdxsx 00:24:07.662 17:20:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:24:07.662 17:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:24:07.662 17:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:24:07.663 17:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:24:07.663 17:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:07.663 17:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:07.663 17:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:24:07.663 00:24:07.663 real 0m4.407s 00:24:07.663 user 0m5.136s 00:24:07.663 sys 0m0.656s 00:24:07.663 ************************************ 00:24:07.663 END TEST raid_read_error_test 00:24:07.663 ************************************ 00:24:07.663 17:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:07.663 17:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.663 17:20:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:24:07.663 17:20:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:07.663 17:20:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:07.663 17:20:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:07.663 ************************************ 00:24:07.663 START TEST raid_write_error_test 00:24:07.663 ************************************ 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:24:07.663 17:20:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:24:07.663 17:20:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6iKfQpN490 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62663 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62663 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62663 ']' 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:07.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.663 17:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.922 [2024-11-26 17:20:37.793694] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:24:07.922 [2024-11-26 17:20:37.793980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62663 ] 00:24:07.922 [2024-11-26 17:20:37.977966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.181 [2024-11-26 17:20:38.123871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.440 [2024-11-26 17:20:38.351746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:08.440 [2024-11-26 17:20:38.351802] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.700 BaseBdev1_malloc 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.700 true 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.700 [2024-11-26 17:20:38.727053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:08.700 [2024-11-26 17:20:38.727247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:08.700 [2024-11-26 17:20:38.727281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:08.700 [2024-11-26 17:20:38.727296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:08.700 [2024-11-26 17:20:38.729865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:08.700 [2024-11-26 17:20:38.729910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:08.700 BaseBdev1 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.700 BaseBdev2_malloc 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:24:08.700 17:20:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.700 true 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.700 [2024-11-26 17:20:38.801350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:08.700 [2024-11-26 17:20:38.801575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:08.700 [2024-11-26 17:20:38.801608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:08.700 [2024-11-26 17:20:38.801624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:08.700 [2024-11-26 17:20:38.804304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:08.700 [2024-11-26 17:20:38.804351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:08.700 BaseBdev2 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.700 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.968 [2024-11-26 17:20:38.813437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:24:08.968 [2024-11-26 17:20:38.815924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:08.968 [2024-11-26 17:20:38.816274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:08.968 [2024-11-26 17:20:38.816391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:08.968 [2024-11-26 17:20:38.816768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:08.968 [2024-11-26 17:20:38.817014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:08.968 [2024-11-26 17:20:38.817059] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:08.968 [2024-11-26 17:20:38.817403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:08.968 17:20:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:08.968 "name": "raid_bdev1", 00:24:08.968 "uuid": "e7b92eea-8cf9-496c-99ad-62e8eb8839fa", 00:24:08.968 "strip_size_kb": 64, 00:24:08.968 "state": "online", 00:24:08.968 "raid_level": "concat", 00:24:08.968 "superblock": true, 00:24:08.968 "num_base_bdevs": 2, 00:24:08.968 "num_base_bdevs_discovered": 2, 00:24:08.968 "num_base_bdevs_operational": 2, 00:24:08.968 "base_bdevs_list": [ 00:24:08.968 { 00:24:08.968 "name": "BaseBdev1", 00:24:08.968 "uuid": "f5710f60-9e59-5104-9907-97149f23c854", 00:24:08.968 "is_configured": true, 00:24:08.968 "data_offset": 2048, 00:24:08.968 "data_size": 63488 00:24:08.968 }, 00:24:08.968 { 00:24:08.968 "name": "BaseBdev2", 00:24:08.968 "uuid": "6ef39c05-343a-5d45-bbeb-f7166e2e3fcb", 00:24:08.968 "is_configured": true, 00:24:08.968 "data_offset": 2048, 00:24:08.968 "data_size": 63488 00:24:08.968 } 00:24:08.968 ] 00:24:08.968 }' 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:08.968 17:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.254 17:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:24:09.254 17:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:09.254 [2024-11-26 17:20:39.350437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.193 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.452 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:10.452 "name": "raid_bdev1", 00:24:10.452 "uuid": "e7b92eea-8cf9-496c-99ad-62e8eb8839fa", 00:24:10.452 "strip_size_kb": 64, 00:24:10.452 "state": "online", 00:24:10.452 "raid_level": "concat", 00:24:10.452 "superblock": true, 00:24:10.452 "num_base_bdevs": 2, 00:24:10.452 "num_base_bdevs_discovered": 2, 00:24:10.452 "num_base_bdevs_operational": 2, 00:24:10.452 "base_bdevs_list": [ 00:24:10.452 { 00:24:10.452 "name": "BaseBdev1", 00:24:10.452 "uuid": "f5710f60-9e59-5104-9907-97149f23c854", 00:24:10.452 "is_configured": true, 00:24:10.452 "data_offset": 2048, 00:24:10.452 "data_size": 63488 00:24:10.452 }, 00:24:10.452 { 00:24:10.452 "name": "BaseBdev2", 00:24:10.452 "uuid": "6ef39c05-343a-5d45-bbeb-f7166e2e3fcb", 00:24:10.452 "is_configured": true, 00:24:10.452 "data_offset": 2048, 00:24:10.452 "data_size": 63488 00:24:10.452 } 00:24:10.452 ] 00:24:10.452 }' 00:24:10.452 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:10.452 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.712 17:20:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:10.712 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.713 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.713 [2024-11-26 17:20:40.710081] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:10.713 [2024-11-26 17:20:40.710128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:10.713 [2024-11-26 17:20:40.712787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:10.713 [2024-11-26 17:20:40.712843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:10.713 [2024-11-26 17:20:40.712878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:10.713 [2024-11-26 17:20:40.712894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:10.713 { 00:24:10.713 "results": [ 00:24:10.713 { 00:24:10.713 "job": "raid_bdev1", 00:24:10.713 "core_mask": "0x1", 00:24:10.713 "workload": "randrw", 00:24:10.713 "percentage": 50, 00:24:10.713 "status": "finished", 00:24:10.713 "queue_depth": 1, 00:24:10.713 "io_size": 131072, 00:24:10.713 "runtime": 1.359017, 00:24:10.713 "iops": 15152.128339822091, 00:24:10.713 "mibps": 1894.0160424777614, 00:24:10.713 "io_failed": 1, 00:24:10.713 "io_timeout": 0, 00:24:10.713 "avg_latency_us": 91.78480510689384, 00:24:10.713 "min_latency_us": 26.52530120481928, 00:24:10.713 "max_latency_us": 1441.0024096385541 00:24:10.713 } 00:24:10.713 ], 00:24:10.713 "core_count": 1 00:24:10.713 } 00:24:10.713 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.713 17:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62663 00:24:10.713 17:20:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62663 ']' 00:24:10.713 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62663 00:24:10.713 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:24:10.713 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:10.713 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62663 00:24:10.713 killing process with pid 62663 00:24:10.713 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:10.713 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:10.713 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62663' 00:24:10.713 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62663 00:24:10.713 [2024-11-26 17:20:40.762413] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:10.713 17:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62663 00:24:10.972 [2024-11-26 17:20:40.904285] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:12.350 17:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:24:12.350 17:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:24:12.350 17:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6iKfQpN490 00:24:12.350 17:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:24:12.350 17:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:24:12.350 17:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:12.350 17:20:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:12.350 17:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:24:12.350 00:24:12.350 real 0m4.476s 00:24:12.350 user 0m5.276s 00:24:12.350 sys 0m0.652s 00:24:12.350 ************************************ 00:24:12.350 END TEST raid_write_error_test 00:24:12.350 ************************************ 00:24:12.350 17:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:12.350 17:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.350 17:20:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:24:12.350 17:20:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:24:12.350 17:20:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:12.350 17:20:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:12.350 17:20:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:12.350 ************************************ 00:24:12.350 START TEST raid_state_function_test 00:24:12.350 ************************************ 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:12.350 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:24:12.351 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:24:12.351 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:24:12.351 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:24:12.351 Process raid pid: 62806 00:24:12.351 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62806 00:24:12.351 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:12.351 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62806' 00:24:12.351 17:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62806 00:24:12.351 17:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62806 ']' 00:24:12.351 17:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.351 17:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.351 17:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.351 17:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.351 17:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.351 [2024-11-26 17:20:42.342169] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:24:12.351 [2024-11-26 17:20:42.342481] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.609 [2024-11-26 17:20:42.531260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.609 [2024-11-26 17:20:42.696662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.870 [2024-11-26 17:20:42.929355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:12.870 [2024-11-26 17:20:42.929413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.129 [2024-11-26 17:20:43.192242] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:13.129 [2024-11-26 17:20:43.192441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:13.129 [2024-11-26 17:20:43.192552] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:13.129 [2024-11-26 17:20:43.192601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.129 17:20:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.129 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.387 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:13.387 "name": "Existed_Raid", 00:24:13.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.387 "strip_size_kb": 0, 00:24:13.387 "state": "configuring", 00:24:13.387 
"raid_level": "raid1", 00:24:13.387 "superblock": false, 00:24:13.387 "num_base_bdevs": 2, 00:24:13.387 "num_base_bdevs_discovered": 0, 00:24:13.387 "num_base_bdevs_operational": 2, 00:24:13.387 "base_bdevs_list": [ 00:24:13.387 { 00:24:13.387 "name": "BaseBdev1", 00:24:13.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.387 "is_configured": false, 00:24:13.387 "data_offset": 0, 00:24:13.387 "data_size": 0 00:24:13.387 }, 00:24:13.387 { 00:24:13.387 "name": "BaseBdev2", 00:24:13.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.387 "is_configured": false, 00:24:13.387 "data_offset": 0, 00:24:13.387 "data_size": 0 00:24:13.387 } 00:24:13.387 ] 00:24:13.387 }' 00:24:13.387 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:13.387 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.646 [2024-11-26 17:20:43.599699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:13.646 [2024-11-26 17:20:43.599772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:24:13.646 [2024-11-26 17:20:43.611648] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:13.646 [2024-11-26 17:20:43.611837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:13.646 [2024-11-26 17:20:43.611934] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:13.646 [2024-11-26 17:20:43.611983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.646 [2024-11-26 17:20:43.664152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:13.646 BaseBdev1 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.646 [ 00:24:13.646 { 00:24:13.646 "name": "BaseBdev1", 00:24:13.646 "aliases": [ 00:24:13.646 "4cbf7d1b-960c-4c98-901f-e9dd97f7cd85" 00:24:13.646 ], 00:24:13.646 "product_name": "Malloc disk", 00:24:13.646 "block_size": 512, 00:24:13.646 "num_blocks": 65536, 00:24:13.646 "uuid": "4cbf7d1b-960c-4c98-901f-e9dd97f7cd85", 00:24:13.646 "assigned_rate_limits": { 00:24:13.646 "rw_ios_per_sec": 0, 00:24:13.646 "rw_mbytes_per_sec": 0, 00:24:13.646 "r_mbytes_per_sec": 0, 00:24:13.646 "w_mbytes_per_sec": 0 00:24:13.646 }, 00:24:13.646 "claimed": true, 00:24:13.646 "claim_type": "exclusive_write", 00:24:13.646 "zoned": false, 00:24:13.646 "supported_io_types": { 00:24:13.646 "read": true, 00:24:13.646 "write": true, 00:24:13.646 "unmap": true, 00:24:13.646 "flush": true, 00:24:13.646 "reset": true, 00:24:13.646 "nvme_admin": false, 00:24:13.646 "nvme_io": false, 00:24:13.646 "nvme_io_md": false, 00:24:13.646 "write_zeroes": true, 00:24:13.646 "zcopy": true, 00:24:13.646 "get_zone_info": false, 00:24:13.646 "zone_management": false, 00:24:13.646 "zone_append": false, 00:24:13.646 "compare": false, 00:24:13.646 "compare_and_write": false, 00:24:13.646 "abort": true, 00:24:13.646 "seek_hole": false, 00:24:13.646 "seek_data": false, 00:24:13.646 "copy": true, 00:24:13.646 "nvme_iov_md": 
false 00:24:13.646 }, 00:24:13.646 "memory_domains": [ 00:24:13.646 { 00:24:13.646 "dma_device_id": "system", 00:24:13.646 "dma_device_type": 1 00:24:13.646 }, 00:24:13.646 { 00:24:13.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:13.646 "dma_device_type": 2 00:24:13.646 } 00:24:13.646 ], 00:24:13.646 "driver_specific": {} 00:24:13.646 } 00:24:13.646 ] 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.646 17:20:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:13.646 "name": "Existed_Raid", 00:24:13.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.646 "strip_size_kb": 0, 00:24:13.646 "state": "configuring", 00:24:13.646 "raid_level": "raid1", 00:24:13.646 "superblock": false, 00:24:13.646 "num_base_bdevs": 2, 00:24:13.646 "num_base_bdevs_discovered": 1, 00:24:13.646 "num_base_bdevs_operational": 2, 00:24:13.646 "base_bdevs_list": [ 00:24:13.646 { 00:24:13.646 "name": "BaseBdev1", 00:24:13.646 "uuid": "4cbf7d1b-960c-4c98-901f-e9dd97f7cd85", 00:24:13.646 "is_configured": true, 00:24:13.646 "data_offset": 0, 00:24:13.646 "data_size": 65536 00:24:13.646 }, 00:24:13.646 { 00:24:13.646 "name": "BaseBdev2", 00:24:13.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.646 "is_configured": false, 00:24:13.646 "data_offset": 0, 00:24:13.646 "data_size": 0 00:24:13.646 } 00:24:13.646 ] 00:24:13.646 }' 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:13.646 17:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.216 [2024-11-26 17:20:44.131698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:14.216 [2024-11-26 17:20:44.131769] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.216 [2024-11-26 17:20:44.143732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:14.216 [2024-11-26 17:20:44.146170] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:14.216 [2024-11-26 17:20:44.146349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:14.216 "name": "Existed_Raid", 00:24:14.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.216 "strip_size_kb": 0, 00:24:14.216 "state": "configuring", 00:24:14.216 "raid_level": "raid1", 00:24:14.216 "superblock": false, 00:24:14.216 "num_base_bdevs": 2, 00:24:14.216 "num_base_bdevs_discovered": 1, 00:24:14.216 "num_base_bdevs_operational": 2, 00:24:14.216 "base_bdevs_list": [ 00:24:14.216 { 00:24:14.216 "name": "BaseBdev1", 00:24:14.216 "uuid": "4cbf7d1b-960c-4c98-901f-e9dd97f7cd85", 00:24:14.216 "is_configured": true, 00:24:14.216 "data_offset": 0, 00:24:14.216 "data_size": 65536 00:24:14.216 }, 00:24:14.216 { 00:24:14.216 "name": "BaseBdev2", 00:24:14.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.216 "is_configured": false, 00:24:14.216 "data_offset": 0, 00:24:14.216 "data_size": 0 00:24:14.216 } 00:24:14.216 
] 00:24:14.216 }' 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:14.216 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.475 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:14.475 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.475 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.735 [2024-11-26 17:20:44.624481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:14.735 [2024-11-26 17:20:44.624575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:14.735 [2024-11-26 17:20:44.624588] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:14.735 [2024-11-26 17:20:44.624879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:14.735 [2024-11-26 17:20:44.625067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:14.735 [2024-11-26 17:20:44.625082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:14.735 BaseBdev2 00:24:14.735 [2024-11-26 17:20:44.625351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:14.735 17:20:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.735 [ 00:24:14.735 { 00:24:14.735 "name": "BaseBdev2", 00:24:14.735 "aliases": [ 00:24:14.735 "43e1b45b-55f7-45ec-8480-2fdffb338711" 00:24:14.735 ], 00:24:14.735 "product_name": "Malloc disk", 00:24:14.735 "block_size": 512, 00:24:14.735 "num_blocks": 65536, 00:24:14.735 "uuid": "43e1b45b-55f7-45ec-8480-2fdffb338711", 00:24:14.735 "assigned_rate_limits": { 00:24:14.735 "rw_ios_per_sec": 0, 00:24:14.735 "rw_mbytes_per_sec": 0, 00:24:14.735 "r_mbytes_per_sec": 0, 00:24:14.735 "w_mbytes_per_sec": 0 00:24:14.735 }, 00:24:14.735 "claimed": true, 00:24:14.735 "claim_type": "exclusive_write", 00:24:14.735 "zoned": false, 00:24:14.735 "supported_io_types": { 00:24:14.735 "read": true, 00:24:14.735 "write": true, 00:24:14.735 "unmap": true, 00:24:14.735 "flush": true, 00:24:14.735 "reset": true, 00:24:14.735 "nvme_admin": false, 00:24:14.735 "nvme_io": false, 00:24:14.735 "nvme_io_md": 
false, 00:24:14.735 "write_zeroes": true, 00:24:14.735 "zcopy": true, 00:24:14.735 "get_zone_info": false, 00:24:14.735 "zone_management": false, 00:24:14.735 "zone_append": false, 00:24:14.735 "compare": false, 00:24:14.735 "compare_and_write": false, 00:24:14.735 "abort": true, 00:24:14.735 "seek_hole": false, 00:24:14.735 "seek_data": false, 00:24:14.735 "copy": true, 00:24:14.735 "nvme_iov_md": false 00:24:14.735 }, 00:24:14.735 "memory_domains": [ 00:24:14.735 { 00:24:14.735 "dma_device_id": "system", 00:24:14.735 "dma_device_type": 1 00:24:14.735 }, 00:24:14.735 { 00:24:14.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:14.735 "dma_device_type": 2 00:24:14.735 } 00:24:14.735 ], 00:24:14.735 "driver_specific": {} 00:24:14.735 } 00:24:14.735 ] 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:14.735 "name": "Existed_Raid", 00:24:14.735 "uuid": "137be6b2-2815-4287-b242-cee87e7eeb20", 00:24:14.735 "strip_size_kb": 0, 00:24:14.735 "state": "online", 00:24:14.735 "raid_level": "raid1", 00:24:14.735 "superblock": false, 00:24:14.735 "num_base_bdevs": 2, 00:24:14.735 "num_base_bdevs_discovered": 2, 00:24:14.735 "num_base_bdevs_operational": 2, 00:24:14.735 "base_bdevs_list": [ 00:24:14.735 { 00:24:14.735 "name": "BaseBdev1", 00:24:14.735 "uuid": "4cbf7d1b-960c-4c98-901f-e9dd97f7cd85", 00:24:14.735 "is_configured": true, 00:24:14.735 "data_offset": 0, 00:24:14.735 "data_size": 65536 00:24:14.735 }, 00:24:14.735 { 00:24:14.735 "name": "BaseBdev2", 00:24:14.735 "uuid": "43e1b45b-55f7-45ec-8480-2fdffb338711", 00:24:14.735 "is_configured": true, 00:24:14.735 "data_offset": 0, 00:24:14.735 "data_size": 65536 00:24:14.735 } 00:24:14.735 ] 00:24:14.735 }' 00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:24:14.735 17:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.996 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:14.996 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:14.996 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:14.996 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:14.996 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:14.996 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.257 [2024-11-26 17:20:45.116158] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:15.257 "name": "Existed_Raid", 00:24:15.257 "aliases": [ 00:24:15.257 "137be6b2-2815-4287-b242-cee87e7eeb20" 00:24:15.257 ], 00:24:15.257 "product_name": "Raid Volume", 00:24:15.257 "block_size": 512, 00:24:15.257 "num_blocks": 65536, 00:24:15.257 "uuid": "137be6b2-2815-4287-b242-cee87e7eeb20", 00:24:15.257 "assigned_rate_limits": { 00:24:15.257 "rw_ios_per_sec": 0, 00:24:15.257 "rw_mbytes_per_sec": 0, 00:24:15.257 "r_mbytes_per_sec": 
0, 00:24:15.257 "w_mbytes_per_sec": 0 00:24:15.257 }, 00:24:15.257 "claimed": false, 00:24:15.257 "zoned": false, 00:24:15.257 "supported_io_types": { 00:24:15.257 "read": true, 00:24:15.257 "write": true, 00:24:15.257 "unmap": false, 00:24:15.257 "flush": false, 00:24:15.257 "reset": true, 00:24:15.257 "nvme_admin": false, 00:24:15.257 "nvme_io": false, 00:24:15.257 "nvme_io_md": false, 00:24:15.257 "write_zeroes": true, 00:24:15.257 "zcopy": false, 00:24:15.257 "get_zone_info": false, 00:24:15.257 "zone_management": false, 00:24:15.257 "zone_append": false, 00:24:15.257 "compare": false, 00:24:15.257 "compare_and_write": false, 00:24:15.257 "abort": false, 00:24:15.257 "seek_hole": false, 00:24:15.257 "seek_data": false, 00:24:15.257 "copy": false, 00:24:15.257 "nvme_iov_md": false 00:24:15.257 }, 00:24:15.257 "memory_domains": [ 00:24:15.257 { 00:24:15.257 "dma_device_id": "system", 00:24:15.257 "dma_device_type": 1 00:24:15.257 }, 00:24:15.257 { 00:24:15.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.257 "dma_device_type": 2 00:24:15.257 }, 00:24:15.257 { 00:24:15.257 "dma_device_id": "system", 00:24:15.257 "dma_device_type": 1 00:24:15.257 }, 00:24:15.257 { 00:24:15.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.257 "dma_device_type": 2 00:24:15.257 } 00:24:15.257 ], 00:24:15.257 "driver_specific": { 00:24:15.257 "raid": { 00:24:15.257 "uuid": "137be6b2-2815-4287-b242-cee87e7eeb20", 00:24:15.257 "strip_size_kb": 0, 00:24:15.257 "state": "online", 00:24:15.257 "raid_level": "raid1", 00:24:15.257 "superblock": false, 00:24:15.257 "num_base_bdevs": 2, 00:24:15.257 "num_base_bdevs_discovered": 2, 00:24:15.257 "num_base_bdevs_operational": 2, 00:24:15.257 "base_bdevs_list": [ 00:24:15.257 { 00:24:15.257 "name": "BaseBdev1", 00:24:15.257 "uuid": "4cbf7d1b-960c-4c98-901f-e9dd97f7cd85", 00:24:15.257 "is_configured": true, 00:24:15.257 "data_offset": 0, 00:24:15.257 "data_size": 65536 00:24:15.257 }, 00:24:15.257 { 00:24:15.257 "name": "BaseBdev2", 
00:24:15.257 "uuid": "43e1b45b-55f7-45ec-8480-2fdffb338711", 00:24:15.257 "is_configured": true, 00:24:15.257 "data_offset": 0, 00:24:15.257 "data_size": 65536 00:24:15.257 } 00:24:15.257 ] 00:24:15.257 } 00:24:15.257 } 00:24:15.257 }' 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:15.257 BaseBdev2' 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.257 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.257 [2024-11-26 17:20:45.327710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:15.517 "name": "Existed_Raid", 00:24:15.517 "uuid": "137be6b2-2815-4287-b242-cee87e7eeb20", 00:24:15.517 "strip_size_kb": 0, 00:24:15.517 "state": "online", 00:24:15.517 "raid_level": "raid1", 00:24:15.517 "superblock": false, 00:24:15.517 "num_base_bdevs": 2, 00:24:15.517 "num_base_bdevs_discovered": 1, 00:24:15.517 "num_base_bdevs_operational": 1, 00:24:15.517 "base_bdevs_list": [ 00:24:15.517 
{ 00:24:15.517 "name": null, 00:24:15.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.517 "is_configured": false, 00:24:15.517 "data_offset": 0, 00:24:15.517 "data_size": 65536 00:24:15.517 }, 00:24:15.517 { 00:24:15.517 "name": "BaseBdev2", 00:24:15.517 "uuid": "43e1b45b-55f7-45ec-8480-2fdffb338711", 00:24:15.517 "is_configured": true, 00:24:15.517 "data_offset": 0, 00:24:15.517 "data_size": 65536 00:24:15.517 } 00:24:15.517 ] 00:24:15.517 }' 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:15.517 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.084 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:16.084 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:16.084 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:16.084 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.084 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.084 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.084 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.084 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:16.084 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:16.084 17:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:16.084 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.084 17:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:24:16.084 [2024-11-26 17:20:45.958590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:16.084 [2024-11-26 17:20:45.958843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:16.084 [2024-11-26 17:20:46.057453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:16.084 [2024-11-26 17:20:46.057774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:16.084 [2024-11-26 17:20:46.057824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62806 00:24:16.084 17:20:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62806 ']' 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62806 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62806 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:16.084 killing process with pid 62806 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62806' 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62806 00:24:16.084 [2024-11-26 17:20:46.150822] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:16.084 17:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62806 00:24:16.084 [2024-11-26 17:20:46.168266] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:24:17.463 00:24:17.463 real 0m5.148s 00:24:17.463 user 0m7.245s 00:24:17.463 sys 0m1.036s 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:17.463 ************************************ 00:24:17.463 END TEST raid_state_function_test 00:24:17.463 ************************************ 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.463 17:20:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:24:17.463 17:20:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:17.463 17:20:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:17.463 17:20:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:17.463 ************************************ 00:24:17.463 START TEST raid_state_function_test_sb 00:24:17.463 ************************************ 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63063 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:17.463 Process raid pid: 63063 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63063' 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63063 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63063 ']' 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.463 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.463 17:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.463 [2024-11-26 17:20:47.550956] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:24:17.463 [2024-11-26 17:20:47.551097] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.722 [2024-11-26 17:20:47.737300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.981 [2024-11-26 17:20:47.886931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.239 [2024-11-26 17:20:48.123563] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:18.239 [2024-11-26 17:20:48.123617] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.498 [2024-11-26 17:20:48.430073] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:18.498 [2024-11-26 17:20:48.430142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:18.498 [2024-11-26 17:20:48.430155] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:18.498 [2024-11-26 17:20:48.430169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.498 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:18.498 "name": "Existed_Raid", 00:24:18.498 "uuid": "ea23b679-37b7-456b-93ac-61e16b35c30b", 00:24:18.499 "strip_size_kb": 0, 00:24:18.499 "state": "configuring", 00:24:18.499 "raid_level": "raid1", 00:24:18.499 "superblock": true, 00:24:18.499 "num_base_bdevs": 2, 00:24:18.499 "num_base_bdevs_discovered": 0, 00:24:18.499 "num_base_bdevs_operational": 2, 00:24:18.499 "base_bdevs_list": [ 00:24:18.499 { 00:24:18.499 "name": "BaseBdev1", 00:24:18.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.499 "is_configured": false, 00:24:18.499 "data_offset": 0, 00:24:18.499 "data_size": 0 00:24:18.499 }, 00:24:18.499 { 00:24:18.499 "name": "BaseBdev2", 00:24:18.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.499 "is_configured": false, 00:24:18.499 "data_offset": 0, 00:24:18.499 "data_size": 0 00:24:18.499 } 00:24:18.499 ] 00:24:18.499 }' 00:24:18.499 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:18.499 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.758 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:18.758 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.758 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.758 [2024-11-26 17:20:48.869633] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:24:18.758 [2024-11-26 17:20:48.869680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.017 [2024-11-26 17:20:48.881610] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:19.017 [2024-11-26 17:20:48.881659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:19.017 [2024-11-26 17:20:48.881670] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:19.017 [2024-11-26 17:20:48.881687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.017 [2024-11-26 17:20:48.932698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:19.017 BaseBdev1 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.017 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.017 [ 00:24:19.017 { 00:24:19.017 "name": "BaseBdev1", 00:24:19.017 "aliases": [ 00:24:19.017 "bccb2545-f8a7-4951-b716-f964a85e4ee3" 00:24:19.017 ], 00:24:19.017 "product_name": "Malloc disk", 00:24:19.017 "block_size": 512, 00:24:19.017 "num_blocks": 65536, 00:24:19.017 "uuid": "bccb2545-f8a7-4951-b716-f964a85e4ee3", 00:24:19.017 "assigned_rate_limits": { 00:24:19.017 "rw_ios_per_sec": 0, 00:24:19.017 "rw_mbytes_per_sec": 0, 00:24:19.017 "r_mbytes_per_sec": 0, 00:24:19.017 "w_mbytes_per_sec": 0 00:24:19.017 }, 00:24:19.017 "claimed": true, 
00:24:19.017 "claim_type": "exclusive_write", 00:24:19.017 "zoned": false, 00:24:19.017 "supported_io_types": { 00:24:19.017 "read": true, 00:24:19.017 "write": true, 00:24:19.017 "unmap": true, 00:24:19.017 "flush": true, 00:24:19.017 "reset": true, 00:24:19.017 "nvme_admin": false, 00:24:19.017 "nvme_io": false, 00:24:19.017 "nvme_io_md": false, 00:24:19.017 "write_zeroes": true, 00:24:19.017 "zcopy": true, 00:24:19.017 "get_zone_info": false, 00:24:19.017 "zone_management": false, 00:24:19.017 "zone_append": false, 00:24:19.017 "compare": false, 00:24:19.017 "compare_and_write": false, 00:24:19.017 "abort": true, 00:24:19.017 "seek_hole": false, 00:24:19.017 "seek_data": false, 00:24:19.017 "copy": true, 00:24:19.017 "nvme_iov_md": false 00:24:19.017 }, 00:24:19.017 "memory_domains": [ 00:24:19.017 { 00:24:19.017 "dma_device_id": "system", 00:24:19.017 "dma_device_type": 1 00:24:19.018 }, 00:24:19.018 { 00:24:19.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:19.018 "dma_device_type": 2 00:24:19.018 } 00:24:19.018 ], 00:24:19.018 "driver_specific": {} 00:24:19.018 } 00:24:19.018 ] 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.018 17:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.018 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.018 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:19.018 "name": "Existed_Raid", 00:24:19.018 "uuid": "f55c8514-265f-49d0-85c7-c77b4b71fff1", 00:24:19.018 "strip_size_kb": 0, 00:24:19.018 "state": "configuring", 00:24:19.018 "raid_level": "raid1", 00:24:19.018 "superblock": true, 00:24:19.018 "num_base_bdevs": 2, 00:24:19.018 "num_base_bdevs_discovered": 1, 00:24:19.018 "num_base_bdevs_operational": 2, 00:24:19.018 "base_bdevs_list": [ 00:24:19.018 { 00:24:19.018 "name": "BaseBdev1", 00:24:19.018 "uuid": "bccb2545-f8a7-4951-b716-f964a85e4ee3", 00:24:19.018 "is_configured": true, 00:24:19.018 "data_offset": 2048, 00:24:19.018 "data_size": 63488 00:24:19.018 }, 00:24:19.018 { 00:24:19.018 "name": "BaseBdev2", 00:24:19.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.018 "is_configured": false, 00:24:19.018 
"data_offset": 0, 00:24:19.018 "data_size": 0 00:24:19.018 } 00:24:19.018 ] 00:24:19.018 }' 00:24:19.018 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:19.018 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.583 [2024-11-26 17:20:49.392684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:19.583 [2024-11-26 17:20:49.392752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.583 [2024-11-26 17:20:49.400719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:19.583 [2024-11-26 17:20:49.403089] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:19.583 [2024-11-26 17:20:49.403140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:19.583 "name": "Existed_Raid", 00:24:19.583 "uuid": "c6519877-720c-4741-a253-b5b104a71768", 00:24:19.583 "strip_size_kb": 0, 00:24:19.583 "state": "configuring", 00:24:19.583 "raid_level": "raid1", 00:24:19.583 "superblock": true, 00:24:19.583 "num_base_bdevs": 2, 00:24:19.583 "num_base_bdevs_discovered": 1, 00:24:19.583 "num_base_bdevs_operational": 2, 00:24:19.583 "base_bdevs_list": [ 00:24:19.583 { 00:24:19.583 "name": "BaseBdev1", 00:24:19.583 "uuid": "bccb2545-f8a7-4951-b716-f964a85e4ee3", 00:24:19.583 "is_configured": true, 00:24:19.583 "data_offset": 2048, 00:24:19.583 "data_size": 63488 00:24:19.583 }, 00:24:19.583 { 00:24:19.583 "name": "BaseBdev2", 00:24:19.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.583 "is_configured": false, 00:24:19.583 "data_offset": 0, 00:24:19.583 "data_size": 0 00:24:19.583 } 00:24:19.583 ] 00:24:19.583 }' 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:19.583 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.841 [2024-11-26 17:20:49.895087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:19.841 [2024-11-26 17:20:49.895386] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:19.841 [2024-11-26 17:20:49.895404] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:19.841 [2024-11-26 17:20:49.895758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:19.841 
[2024-11-26 17:20:49.895952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:19.841 [2024-11-26 17:20:49.895979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:19.841 BaseBdev2 00:24:19.841 [2024-11-26 17:20:49.896139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:24:19.841 [ 00:24:19.841 { 00:24:19.841 "name": "BaseBdev2", 00:24:19.841 "aliases": [ 00:24:19.841 "87844b10-8137-4b4b-b79c-3c0a343de0a7" 00:24:19.841 ], 00:24:19.841 "product_name": "Malloc disk", 00:24:19.841 "block_size": 512, 00:24:19.841 "num_blocks": 65536, 00:24:19.841 "uuid": "87844b10-8137-4b4b-b79c-3c0a343de0a7", 00:24:19.841 "assigned_rate_limits": { 00:24:19.841 "rw_ios_per_sec": 0, 00:24:19.841 "rw_mbytes_per_sec": 0, 00:24:19.841 "r_mbytes_per_sec": 0, 00:24:19.841 "w_mbytes_per_sec": 0 00:24:19.841 }, 00:24:19.841 "claimed": true, 00:24:19.841 "claim_type": "exclusive_write", 00:24:19.841 "zoned": false, 00:24:19.841 "supported_io_types": { 00:24:19.841 "read": true, 00:24:19.841 "write": true, 00:24:19.841 "unmap": true, 00:24:19.841 "flush": true, 00:24:19.841 "reset": true, 00:24:19.841 "nvme_admin": false, 00:24:19.841 "nvme_io": false, 00:24:19.841 "nvme_io_md": false, 00:24:19.841 "write_zeroes": true, 00:24:19.841 "zcopy": true, 00:24:19.841 "get_zone_info": false, 00:24:19.841 "zone_management": false, 00:24:19.841 "zone_append": false, 00:24:19.841 "compare": false, 00:24:19.841 "compare_and_write": false, 00:24:19.841 "abort": true, 00:24:19.841 "seek_hole": false, 00:24:19.841 "seek_data": false, 00:24:19.841 "copy": true, 00:24:19.841 "nvme_iov_md": false 00:24:19.841 }, 00:24:19.841 "memory_domains": [ 00:24:19.841 { 00:24:19.841 "dma_device_id": "system", 00:24:19.841 "dma_device_type": 1 00:24:19.841 }, 00:24:19.841 { 00:24:19.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:19.841 "dma_device_type": 2 00:24:19.841 } 00:24:19.841 ], 00:24:19.841 "driver_specific": {} 00:24:19.841 } 00:24:19.841 ] 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:19.841 17:20:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:19.842 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:19.842 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:19.842 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:19.842 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:19.842 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:19.842 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:19.842 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:19.842 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:19.842 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:19.842 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:19.842 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:19.842 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.842 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.842 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.842 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.101 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.101 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:24:20.101 "name": "Existed_Raid", 00:24:20.101 "uuid": "c6519877-720c-4741-a253-b5b104a71768", 00:24:20.101 "strip_size_kb": 0, 00:24:20.101 "state": "online", 00:24:20.101 "raid_level": "raid1", 00:24:20.101 "superblock": true, 00:24:20.101 "num_base_bdevs": 2, 00:24:20.101 "num_base_bdevs_discovered": 2, 00:24:20.101 "num_base_bdevs_operational": 2, 00:24:20.101 "base_bdevs_list": [ 00:24:20.101 { 00:24:20.101 "name": "BaseBdev1", 00:24:20.101 "uuid": "bccb2545-f8a7-4951-b716-f964a85e4ee3", 00:24:20.101 "is_configured": true, 00:24:20.101 "data_offset": 2048, 00:24:20.101 "data_size": 63488 00:24:20.101 }, 00:24:20.101 { 00:24:20.101 "name": "BaseBdev2", 00:24:20.101 "uuid": "87844b10-8137-4b4b-b79c-3c0a343de0a7", 00:24:20.101 "is_configured": true, 00:24:20.101 "data_offset": 2048, 00:24:20.101 "data_size": 63488 00:24:20.101 } 00:24:20.101 ] 00:24:20.101 }' 00:24:20.101 17:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:20.101 17:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.361 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:20.362 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:20.362 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:20.362 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:20.362 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:24:20.362 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:20.362 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:20.362 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:20.362 17:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.362 17:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.362 [2024-11-26 17:20:50.358948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:20.362 17:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.362 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:20.362 "name": "Existed_Raid", 00:24:20.362 "aliases": [ 00:24:20.362 "c6519877-720c-4741-a253-b5b104a71768" 00:24:20.362 ], 00:24:20.362 "product_name": "Raid Volume", 00:24:20.362 "block_size": 512, 00:24:20.362 "num_blocks": 63488, 00:24:20.362 "uuid": "c6519877-720c-4741-a253-b5b104a71768", 00:24:20.362 "assigned_rate_limits": { 00:24:20.362 "rw_ios_per_sec": 0, 00:24:20.362 "rw_mbytes_per_sec": 0, 00:24:20.362 "r_mbytes_per_sec": 0, 00:24:20.362 "w_mbytes_per_sec": 0 00:24:20.362 }, 00:24:20.362 "claimed": false, 00:24:20.362 "zoned": false, 00:24:20.362 "supported_io_types": { 00:24:20.362 "read": true, 00:24:20.362 "write": true, 00:24:20.362 "unmap": false, 00:24:20.362 "flush": false, 00:24:20.362 "reset": true, 00:24:20.362 "nvme_admin": false, 00:24:20.362 "nvme_io": false, 00:24:20.362 "nvme_io_md": false, 00:24:20.362 "write_zeroes": true, 00:24:20.362 "zcopy": false, 00:24:20.362 "get_zone_info": false, 00:24:20.362 "zone_management": false, 00:24:20.362 "zone_append": false, 00:24:20.362 "compare": false, 00:24:20.362 "compare_and_write": false, 00:24:20.362 "abort": false, 00:24:20.362 "seek_hole": false, 00:24:20.362 "seek_data": false, 00:24:20.362 "copy": false, 00:24:20.362 "nvme_iov_md": false 00:24:20.362 }, 00:24:20.362 "memory_domains": [ 00:24:20.362 { 00:24:20.362 "dma_device_id": "system", 00:24:20.362 "dma_device_type": 1 00:24:20.362 }, 
00:24:20.362 { 00:24:20.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.362 "dma_device_type": 2 00:24:20.362 }, 00:24:20.362 { 00:24:20.362 "dma_device_id": "system", 00:24:20.362 "dma_device_type": 1 00:24:20.362 }, 00:24:20.362 { 00:24:20.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.362 "dma_device_type": 2 00:24:20.362 } 00:24:20.362 ], 00:24:20.362 "driver_specific": { 00:24:20.362 "raid": { 00:24:20.362 "uuid": "c6519877-720c-4741-a253-b5b104a71768", 00:24:20.362 "strip_size_kb": 0, 00:24:20.362 "state": "online", 00:24:20.362 "raid_level": "raid1", 00:24:20.362 "superblock": true, 00:24:20.362 "num_base_bdevs": 2, 00:24:20.362 "num_base_bdevs_discovered": 2, 00:24:20.362 "num_base_bdevs_operational": 2, 00:24:20.362 "base_bdevs_list": [ 00:24:20.362 { 00:24:20.362 "name": "BaseBdev1", 00:24:20.362 "uuid": "bccb2545-f8a7-4951-b716-f964a85e4ee3", 00:24:20.362 "is_configured": true, 00:24:20.362 "data_offset": 2048, 00:24:20.362 "data_size": 63488 00:24:20.362 }, 00:24:20.362 { 00:24:20.362 "name": "BaseBdev2", 00:24:20.362 "uuid": "87844b10-8137-4b4b-b79c-3c0a343de0a7", 00:24:20.362 "is_configured": true, 00:24:20.362 "data_offset": 2048, 00:24:20.362 "data_size": 63488 00:24:20.362 } 00:24:20.362 ] 00:24:20.362 } 00:24:20.362 } 00:24:20.362 }' 00:24:20.362 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:20.362 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:20.362 BaseBdev2' 00:24:20.362 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.624 [2024-11-26 17:20:50.574459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:20.624 
17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:20.624 "name": "Existed_Raid", 00:24:20.624 "uuid": "c6519877-720c-4741-a253-b5b104a71768", 00:24:20.624 "strip_size_kb": 0, 00:24:20.624 "state": "online", 00:24:20.624 "raid_level": "raid1", 00:24:20.624 "superblock": true, 00:24:20.624 "num_base_bdevs": 2, 00:24:20.624 "num_base_bdevs_discovered": 1, 00:24:20.624 "num_base_bdevs_operational": 1, 00:24:20.624 "base_bdevs_list": [ 00:24:20.624 { 00:24:20.624 "name": null, 00:24:20.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.624 "is_configured": false, 00:24:20.624 "data_offset": 0, 00:24:20.624 "data_size": 63488 00:24:20.624 }, 00:24:20.624 { 00:24:20.624 "name": "BaseBdev2", 00:24:20.624 "uuid": "87844b10-8137-4b4b-b79c-3c0a343de0a7", 00:24:20.624 "is_configured": true, 00:24:20.624 "data_offset": 2048, 00:24:20.624 "data_size": 63488 00:24:20.624 } 00:24:20.624 ] 00:24:20.624 }' 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:20.624 17:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:21.191 17:20:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.191 [2024-11-26 17:20:51.159459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:21.191 [2024-11-26 17:20:51.159600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:21.191 [2024-11-26 17:20:51.256819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:21.191 [2024-11-26 17:20:51.256889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:21.191 [2024-11-26 17:20:51.256905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.191 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.450 17:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:21.450 17:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:21.450 17:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:21.450 17:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63063 00:24:21.450 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63063 ']' 00:24:21.450 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63063 00:24:21.450 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:24:21.450 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.450 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63063 00:24:21.450 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:21.450 17:20:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:21.450 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63063' 00:24:21.450 killing process with pid 63063 00:24:21.450 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63063 00:24:21.450 [2024-11-26 17:20:51.347262] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:21.450 17:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63063 00:24:21.450 [2024-11-26 17:20:51.365219] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:22.826 17:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:24:22.826 00:24:22.826 real 0m5.106s 00:24:22.826 user 0m7.211s 00:24:22.826 sys 0m1.004s 00:24:22.826 17:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:22.826 17:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.826 ************************************ 00:24:22.826 END TEST raid_state_function_test_sb 00:24:22.826 ************************************ 00:24:22.826 17:20:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:24:22.826 17:20:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:22.826 17:20:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:22.826 17:20:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:22.826 ************************************ 00:24:22.826 START TEST raid_superblock_test 00:24:22.826 ************************************ 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63312 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63312 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63312 ']' 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.826 17:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.826 [2024-11-26 17:20:52.724528] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:24:22.826 [2024-11-26 17:20:52.724677] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63312 ] 00:24:22.826 [2024-11-26 17:20:52.899729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.084 [2024-11-26 17:20:53.043684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.343 [2024-11-26 17:20:53.254456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:23.343 [2024-11-26 17:20:53.254537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:23.601 17:20:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:23.601 malloc1 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:23.601 [2024-11-26 17:20:53.666094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:23.601 [2024-11-26 17:20:53.666166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.601 [2024-11-26 17:20:53.666194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:23.601 [2024-11-26 17:20:53.666207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.601 
[2024-11-26 17:20:53.668853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.601 [2024-11-26 17:20:53.668893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:23.601 pt1 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.601 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:23.601 malloc2 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.860 17:20:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:23.860 [2024-11-26 17:20:53.720324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:23.860 [2024-11-26 17:20:53.720399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.860 [2024-11-26 17:20:53.720431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:23.860 [2024-11-26 17:20:53.720443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.860 [2024-11-26 17:20:53.723148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.860 [2024-11-26 17:20:53.723191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:23.860 pt2 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:23.860 [2024-11-26 17:20:53.732370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:23.860 [2024-11-26 17:20:53.734786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:23.860 [2024-11-26 17:20:53.734969] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:23.860 [2024-11-26 17:20:53.734988] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:23.860 [2024-11-26 
17:20:53.735278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:23.860 [2024-11-26 17:20:53.735435] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:23.860 [2024-11-26 17:20:53.735464] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:23.860 [2024-11-26 17:20:53.735643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:23.860 "name": "raid_bdev1", 00:24:23.860 "uuid": "9b4a947e-9649-4475-9022-594c2f0d5ca0", 00:24:23.860 "strip_size_kb": 0, 00:24:23.860 "state": "online", 00:24:23.860 "raid_level": "raid1", 00:24:23.860 "superblock": true, 00:24:23.860 "num_base_bdevs": 2, 00:24:23.860 "num_base_bdevs_discovered": 2, 00:24:23.860 "num_base_bdevs_operational": 2, 00:24:23.860 "base_bdevs_list": [ 00:24:23.860 { 00:24:23.860 "name": "pt1", 00:24:23.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:23.860 "is_configured": true, 00:24:23.860 "data_offset": 2048, 00:24:23.860 "data_size": 63488 00:24:23.860 }, 00:24:23.860 { 00:24:23.860 "name": "pt2", 00:24:23.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:23.860 "is_configured": true, 00:24:23.860 "data_offset": 2048, 00:24:23.860 "data_size": 63488 00:24:23.860 } 00:24:23.860 ] 00:24:23.860 }' 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:23.860 17:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.118 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:24.118 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:24.118 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:24.118 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:24.118 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:24.118 17:20:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:24.118 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:24.118 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:24.118 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.118 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.118 [2024-11-26 17:20:54.152034] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:24.118 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.118 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:24.118 "name": "raid_bdev1", 00:24:24.118 "aliases": [ 00:24:24.118 "9b4a947e-9649-4475-9022-594c2f0d5ca0" 00:24:24.118 ], 00:24:24.118 "product_name": "Raid Volume", 00:24:24.118 "block_size": 512, 00:24:24.118 "num_blocks": 63488, 00:24:24.118 "uuid": "9b4a947e-9649-4475-9022-594c2f0d5ca0", 00:24:24.118 "assigned_rate_limits": { 00:24:24.118 "rw_ios_per_sec": 0, 00:24:24.118 "rw_mbytes_per_sec": 0, 00:24:24.118 "r_mbytes_per_sec": 0, 00:24:24.118 "w_mbytes_per_sec": 0 00:24:24.118 }, 00:24:24.118 "claimed": false, 00:24:24.118 "zoned": false, 00:24:24.118 "supported_io_types": { 00:24:24.118 "read": true, 00:24:24.118 "write": true, 00:24:24.118 "unmap": false, 00:24:24.118 "flush": false, 00:24:24.118 "reset": true, 00:24:24.118 "nvme_admin": false, 00:24:24.118 "nvme_io": false, 00:24:24.118 "nvme_io_md": false, 00:24:24.118 "write_zeroes": true, 00:24:24.118 "zcopy": false, 00:24:24.118 "get_zone_info": false, 00:24:24.118 "zone_management": false, 00:24:24.118 "zone_append": false, 00:24:24.118 "compare": false, 00:24:24.118 "compare_and_write": false, 00:24:24.118 "abort": false, 00:24:24.118 "seek_hole": false, 00:24:24.118 
"seek_data": false, 00:24:24.118 "copy": false, 00:24:24.118 "nvme_iov_md": false 00:24:24.119 }, 00:24:24.119 "memory_domains": [ 00:24:24.119 { 00:24:24.119 "dma_device_id": "system", 00:24:24.119 "dma_device_type": 1 00:24:24.119 }, 00:24:24.119 { 00:24:24.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.119 "dma_device_type": 2 00:24:24.119 }, 00:24:24.119 { 00:24:24.119 "dma_device_id": "system", 00:24:24.119 "dma_device_type": 1 00:24:24.119 }, 00:24:24.119 { 00:24:24.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.119 "dma_device_type": 2 00:24:24.119 } 00:24:24.119 ], 00:24:24.119 "driver_specific": { 00:24:24.119 "raid": { 00:24:24.119 "uuid": "9b4a947e-9649-4475-9022-594c2f0d5ca0", 00:24:24.119 "strip_size_kb": 0, 00:24:24.119 "state": "online", 00:24:24.119 "raid_level": "raid1", 00:24:24.119 "superblock": true, 00:24:24.119 "num_base_bdevs": 2, 00:24:24.119 "num_base_bdevs_discovered": 2, 00:24:24.119 "num_base_bdevs_operational": 2, 00:24:24.119 "base_bdevs_list": [ 00:24:24.119 { 00:24:24.119 "name": "pt1", 00:24:24.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:24.119 "is_configured": true, 00:24:24.119 "data_offset": 2048, 00:24:24.119 "data_size": 63488 00:24:24.119 }, 00:24:24.119 { 00:24:24.119 "name": "pt2", 00:24:24.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:24.119 "is_configured": true, 00:24:24.119 "data_offset": 2048, 00:24:24.119 "data_size": 63488 00:24:24.119 } 00:24:24.119 ] 00:24:24.119 } 00:24:24.119 } 00:24:24.119 }' 00:24:24.119 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:24.119 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:24.119 pt2' 00:24:24.119 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:24.377 17:20:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.377 [2024-11-26 17:20:54.383790] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9b4a947e-9649-4475-9022-594c2f0d5ca0 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9b4a947e-9649-4475-9022-594c2f0d5ca0 ']' 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.377 [2024-11-26 17:20:54.423382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:24.377 [2024-11-26 17:20:54.423416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:24.377 [2024-11-26 17:20:54.423530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:24.377 [2024-11-26 17:20:54.423603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:24.377 [2024-11-26 17:20:54.423624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] 
| select(.product_name == "passthru")] | any' 00:24:24.377 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.636 [2024-11-26 17:20:54.539262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:24.636 [2024-11-26 17:20:54.541604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:24.636 [2024-11-26 17:20:54.541680] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:24:24.636 [2024-11-26 17:20:54.541740] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:24.636 [2024-11-26 17:20:54.541760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:24.636 [2024-11-26 17:20:54.541773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:24:24.636 request: 00:24:24.636 { 00:24:24.636 "name": "raid_bdev1", 00:24:24.636 "raid_level": "raid1", 00:24:24.636 "base_bdevs": [ 00:24:24.636 "malloc1", 00:24:24.636 "malloc2" 00:24:24.636 ], 00:24:24.636 "superblock": false, 00:24:24.636 "method": "bdev_raid_create", 00:24:24.636 "req_id": 1 00:24:24.636 } 00:24:24.636 Got JSON-RPC error response 00:24:24.636 response: 00:24:24.636 { 00:24:24.636 "code": -17, 00:24:24.636 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:24.636 } 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.636 [2024-11-26 17:20:54.603134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:24.636 [2024-11-26 17:20:54.603204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:24.636 [2024-11-26 17:20:54.603230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:24.636 [2024-11-26 17:20:54.603246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:24.636 [2024-11-26 17:20:54.605946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:24.636 [2024-11-26 17:20:54.605987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:24.636 [2024-11-26 17:20:54.606078] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:24.636 [2024-11-26 17:20:54.606137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:24.636 pt1 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:24.636 17:20:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:24.636 "name": "raid_bdev1", 00:24:24.636 "uuid": "9b4a947e-9649-4475-9022-594c2f0d5ca0", 00:24:24.636 "strip_size_kb": 0, 00:24:24.636 "state": "configuring", 00:24:24.636 "raid_level": "raid1", 00:24:24.636 "superblock": true, 00:24:24.636 "num_base_bdevs": 2, 00:24:24.636 "num_base_bdevs_discovered": 1, 00:24:24.636 "num_base_bdevs_operational": 2, 00:24:24.636 "base_bdevs_list": [ 00:24:24.636 { 00:24:24.636 "name": "pt1", 00:24:24.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:24.636 
"is_configured": true, 00:24:24.636 "data_offset": 2048, 00:24:24.636 "data_size": 63488 00:24:24.636 }, 00:24:24.636 { 00:24:24.636 "name": null, 00:24:24.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:24.636 "is_configured": false, 00:24:24.636 "data_offset": 2048, 00:24:24.636 "data_size": 63488 00:24:24.636 } 00:24:24.636 ] 00:24:24.636 }' 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:24.636 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.895 [2024-11-26 17:20:54.990704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:24.895 [2024-11-26 17:20:54.990799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:24.895 [2024-11-26 17:20:54.990826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:24.895 [2024-11-26 17:20:54.990841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:24.895 [2024-11-26 17:20:54.991359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:24.895 [2024-11-26 17:20:54.991391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:24.895 [2024-11-26 17:20:54.991492] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:24.895 [2024-11-26 17:20:54.991538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:24.895 [2024-11-26 17:20:54.991667] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:24.895 [2024-11-26 17:20:54.991682] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:24.895 [2024-11-26 17:20:54.991951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:24.895 [2024-11-26 17:20:54.992119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:24.895 [2024-11-26 17:20:54.992129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:24.895 [2024-11-26 17:20:54.992266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:24.895 pt2 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:24.895 
17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.895 17:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.895 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.895 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:25.153 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.153 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.153 "name": "raid_bdev1", 00:24:25.153 "uuid": "9b4a947e-9649-4475-9022-594c2f0d5ca0", 00:24:25.153 "strip_size_kb": 0, 00:24:25.153 "state": "online", 00:24:25.153 "raid_level": "raid1", 00:24:25.153 "superblock": true, 00:24:25.153 "num_base_bdevs": 2, 00:24:25.153 "num_base_bdevs_discovered": 2, 00:24:25.153 "num_base_bdevs_operational": 2, 00:24:25.153 "base_bdevs_list": [ 00:24:25.153 { 00:24:25.153 "name": "pt1", 00:24:25.153 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:25.153 "is_configured": true, 00:24:25.153 "data_offset": 2048, 00:24:25.153 "data_size": 63488 00:24:25.153 }, 00:24:25.153 { 00:24:25.153 "name": "pt2", 00:24:25.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:25.153 "is_configured": true, 00:24:25.153 "data_offset": 2048, 00:24:25.153 "data_size": 63488 00:24:25.153 } 00:24:25.153 ] 00:24:25.153 }' 00:24:25.153 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:24:25.153 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:25.411 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:24:25.411 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:25.411 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:25.411 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:25.411 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:25.411 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:25.411 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:25.411 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.411 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:25.411 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:25.411 [2024-11-26 17:20:55.422882] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:25.411 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.411 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:25.411 "name": "raid_bdev1", 00:24:25.411 "aliases": [ 00:24:25.411 "9b4a947e-9649-4475-9022-594c2f0d5ca0" 00:24:25.411 ], 00:24:25.411 "product_name": "Raid Volume", 00:24:25.411 "block_size": 512, 00:24:25.411 "num_blocks": 63488, 00:24:25.411 "uuid": "9b4a947e-9649-4475-9022-594c2f0d5ca0", 00:24:25.411 "assigned_rate_limits": { 00:24:25.411 "rw_ios_per_sec": 0, 00:24:25.411 "rw_mbytes_per_sec": 0, 00:24:25.411 "r_mbytes_per_sec": 0, 00:24:25.411 "w_mbytes_per_sec": 0 
00:24:25.411 }, 00:24:25.411 "claimed": false, 00:24:25.411 "zoned": false, 00:24:25.411 "supported_io_types": { 00:24:25.411 "read": true, 00:24:25.411 "write": true, 00:24:25.411 "unmap": false, 00:24:25.411 "flush": false, 00:24:25.411 "reset": true, 00:24:25.411 "nvme_admin": false, 00:24:25.411 "nvme_io": false, 00:24:25.411 "nvme_io_md": false, 00:24:25.411 "write_zeroes": true, 00:24:25.411 "zcopy": false, 00:24:25.411 "get_zone_info": false, 00:24:25.411 "zone_management": false, 00:24:25.411 "zone_append": false, 00:24:25.411 "compare": false, 00:24:25.411 "compare_and_write": false, 00:24:25.411 "abort": false, 00:24:25.411 "seek_hole": false, 00:24:25.411 "seek_data": false, 00:24:25.411 "copy": false, 00:24:25.411 "nvme_iov_md": false 00:24:25.411 }, 00:24:25.411 "memory_domains": [ 00:24:25.411 { 00:24:25.411 "dma_device_id": "system", 00:24:25.411 "dma_device_type": 1 00:24:25.411 }, 00:24:25.411 { 00:24:25.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.411 "dma_device_type": 2 00:24:25.411 }, 00:24:25.411 { 00:24:25.411 "dma_device_id": "system", 00:24:25.411 "dma_device_type": 1 00:24:25.411 }, 00:24:25.411 { 00:24:25.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.411 "dma_device_type": 2 00:24:25.411 } 00:24:25.411 ], 00:24:25.411 "driver_specific": { 00:24:25.411 "raid": { 00:24:25.411 "uuid": "9b4a947e-9649-4475-9022-594c2f0d5ca0", 00:24:25.411 "strip_size_kb": 0, 00:24:25.411 "state": "online", 00:24:25.411 "raid_level": "raid1", 00:24:25.411 "superblock": true, 00:24:25.411 "num_base_bdevs": 2, 00:24:25.411 "num_base_bdevs_discovered": 2, 00:24:25.411 "num_base_bdevs_operational": 2, 00:24:25.411 "base_bdevs_list": [ 00:24:25.411 { 00:24:25.411 "name": "pt1", 00:24:25.411 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:25.411 "is_configured": true, 00:24:25.411 "data_offset": 2048, 00:24:25.411 "data_size": 63488 00:24:25.411 }, 00:24:25.411 { 00:24:25.411 "name": "pt2", 00:24:25.411 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:24:25.411 "is_configured": true, 00:24:25.411 "data_offset": 2048, 00:24:25.411 "data_size": 63488 00:24:25.411 } 00:24:25.411 ] 00:24:25.411 } 00:24:25.411 } 00:24:25.411 }' 00:24:25.411 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:25.411 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:25.411 pt2' 00:24:25.411 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:25.711 [2024-11-26 17:20:55.610602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9b4a947e-9649-4475-9022-594c2f0d5ca0 '!=' 9b4a947e-9649-4475-9022-594c2f0d5ca0 ']' 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:24:25.711 [2024-11-26 17:20:55.654334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:25.711 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:25.712 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:25.712 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:25.712 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.712 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.712 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:25.712 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:25.712 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.712 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.712 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.712 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:25.712 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.712 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.712 "name": "raid_bdev1", 
00:24:25.712 "uuid": "9b4a947e-9649-4475-9022-594c2f0d5ca0", 00:24:25.712 "strip_size_kb": 0, 00:24:25.712 "state": "online", 00:24:25.712 "raid_level": "raid1", 00:24:25.712 "superblock": true, 00:24:25.712 "num_base_bdevs": 2, 00:24:25.712 "num_base_bdevs_discovered": 1, 00:24:25.712 "num_base_bdevs_operational": 1, 00:24:25.712 "base_bdevs_list": [ 00:24:25.712 { 00:24:25.712 "name": null, 00:24:25.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.712 "is_configured": false, 00:24:25.712 "data_offset": 0, 00:24:25.712 "data_size": 63488 00:24:25.712 }, 00:24:25.712 { 00:24:25.712 "name": "pt2", 00:24:25.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:25.712 "is_configured": true, 00:24:25.712 "data_offset": 2048, 00:24:25.712 "data_size": 63488 00:24:25.712 } 00:24:25.712 ] 00:24:25.712 }' 00:24:25.712 17:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.712 17:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.290 [2024-11-26 17:20:56.097706] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:26.290 [2024-11-26 17:20:56.097887] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:26.290 [2024-11-26 17:20:56.098019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:26.290 [2024-11-26 17:20:56.098078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:26.290 [2024-11-26 17:20:56.098095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:24:26.290 17:20:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.290 [2024-11-26 17:20:56.169583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:26.290 [2024-11-26 17:20:56.169755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.290 [2024-11-26 17:20:56.169848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:26.290 [2024-11-26 17:20:56.169925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.290 [2024-11-26 17:20:56.172631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:26.290 [2024-11-26 17:20:56.172766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:26.290 [2024-11-26 17:20:56.172927] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:26.290 [2024-11-26 17:20:56.173014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:26.290 [2024-11-26 17:20:56.173157] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:26.290 [2024-11-26 17:20:56.173177] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:26.290 [2024-11-26 17:20:56.173436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:26.290 [2024-11-26 17:20:56.173609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:26.290 [2024-11-26 17:20:56.173621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:24:26.290 
[2024-11-26 17:20:56.173823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:26.290 pt2 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:26.290 "name": 
"raid_bdev1", 00:24:26.290 "uuid": "9b4a947e-9649-4475-9022-594c2f0d5ca0", 00:24:26.290 "strip_size_kb": 0, 00:24:26.290 "state": "online", 00:24:26.290 "raid_level": "raid1", 00:24:26.290 "superblock": true, 00:24:26.290 "num_base_bdevs": 2, 00:24:26.290 "num_base_bdevs_discovered": 1, 00:24:26.290 "num_base_bdevs_operational": 1, 00:24:26.290 "base_bdevs_list": [ 00:24:26.290 { 00:24:26.290 "name": null, 00:24:26.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.290 "is_configured": false, 00:24:26.290 "data_offset": 2048, 00:24:26.290 "data_size": 63488 00:24:26.290 }, 00:24:26.290 { 00:24:26.290 "name": "pt2", 00:24:26.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:26.290 "is_configured": true, 00:24:26.290 "data_offset": 2048, 00:24:26.290 "data_size": 63488 00:24:26.290 } 00:24:26.290 ] 00:24:26.290 }' 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:26.290 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.549 [2024-11-26 17:20:56.573570] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:26.549 [2024-11-26 17:20:56.573729] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:26.549 [2024-11-26 17:20:56.573895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:26.549 [2024-11-26 17:20:56.573987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:26.549 [2024-11-26 17:20:56.574233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name raid_bdev1, state offline 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.549 [2024-11-26 17:20:56.629571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:26.549 [2024-11-26 17:20:56.629629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.549 [2024-11-26 17:20:56.629653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:26.549 [2024-11-26 17:20:56.629664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.549 [2024-11-26 17:20:56.632350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:26.549 [2024-11-26 17:20:56.632389] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:26.549 [2024-11-26 17:20:56.632480] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:26.549 [2024-11-26 17:20:56.632548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:26.549 [2024-11-26 17:20:56.632694] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:26.549 [2024-11-26 17:20:56.632707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:26.549 [2024-11-26 17:20:56.632726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:24:26.549 [2024-11-26 17:20:56.632784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:26.549 [2024-11-26 17:20:56.632852] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:24:26.549 [2024-11-26 17:20:56.632862] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:26.549 [2024-11-26 17:20:56.633129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:26.549 [2024-11-26 17:20:56.633275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:24:26.549 [2024-11-26 17:20:56.633290] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:24:26.549 [2024-11-26 17:20:56.633448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:26.549 pt1 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.549 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.807 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:26.807 "name": "raid_bdev1", 00:24:26.807 "uuid": "9b4a947e-9649-4475-9022-594c2f0d5ca0", 00:24:26.807 "strip_size_kb": 0, 00:24:26.807 "state": "online", 00:24:26.807 "raid_level": "raid1", 00:24:26.807 "superblock": true, 00:24:26.807 "num_base_bdevs": 2, 00:24:26.807 "num_base_bdevs_discovered": 1, 00:24:26.807 "num_base_bdevs_operational": 1, 00:24:26.807 
"base_bdevs_list": [ 00:24:26.807 { 00:24:26.807 "name": null, 00:24:26.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.807 "is_configured": false, 00:24:26.807 "data_offset": 2048, 00:24:26.807 "data_size": 63488 00:24:26.807 }, 00:24:26.807 { 00:24:26.807 "name": "pt2", 00:24:26.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:26.807 "is_configured": true, 00:24:26.807 "data_offset": 2048, 00:24:26.807 "data_size": 63488 00:24:26.807 } 00:24:26.807 ] 00:24:26.807 }' 00:24:26.807 17:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:26.807 17:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:27.065 [2024-11-26 17:20:57.129148] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9b4a947e-9649-4475-9022-594c2f0d5ca0 '!=' 9b4a947e-9649-4475-9022-594c2f0d5ca0 ']' 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63312 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63312 ']' 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63312 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:24:27.065 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.324 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63312 00:24:27.324 killing process with pid 63312 00:24:27.324 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.324 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.324 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63312' 00:24:27.324 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63312 00:24:27.324 [2024-11-26 17:20:57.208218] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:27.324 [2024-11-26 17:20:57.208336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:27.324 17:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63312 00:24:27.324 [2024-11-26 17:20:57.208391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:27.324 [2024-11-26 17:20:57.208411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:24:27.324 [2024-11-26 17:20:57.416519] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:28.707 17:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:24:28.707 ************************************ 00:24:28.707 END TEST raid_superblock_test 00:24:28.707 ************************************ 00:24:28.707 00:24:28.707 real 0m5.983s 00:24:28.707 user 0m8.926s 00:24:28.707 sys 0m1.210s 00:24:28.707 17:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.707 17:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:28.707 17:20:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:24:28.707 17:20:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:28.707 17:20:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.707 17:20:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:28.707 ************************************ 00:24:28.707 START TEST raid_read_error_test 00:24:28.707 ************************************ 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QHMF9MnfHa 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63636 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63636 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 63636 ']' 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.707 17:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:28.707 [2024-11-26 17:20:58.798581] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:24:28.707 [2024-11-26 17:20:58.798714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63636 ] 00:24:28.966 [2024-11-26 17:20:58.983178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.225 [2024-11-26 17:20:59.123013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.483 [2024-11-26 17:20:59.346620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:29.483 [2024-11-26 17:20:59.346664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:29.743 17:20:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.743 BaseBdev1_malloc 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.743 true 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.743 [2024-11-26 17:20:59.692664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:29.743 [2024-11-26 17:20:59.692861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:29.743 [2024-11-26 17:20:59.692897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:29.743 [2024-11-26 17:20:59.692912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:29.743 [2024-11-26 17:20:59.695684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:29.743 [2024-11-26 17:20:59.695728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:24:29.743 BaseBdev1 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.743 BaseBdev2_malloc 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.743 true 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.743 [2024-11-26 17:20:59.761321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:29.743 [2024-11-26 17:20:59.761387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:29.743 [2024-11-26 17:20:59.761407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:29.743 [2024-11-26 17:20:59.761421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:24:29.743 [2024-11-26 17:20:59.764015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:29.743 [2024-11-26 17:20:59.764057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:29.743 BaseBdev2 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.743 [2024-11-26 17:20:59.773373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:29.743 [2024-11-26 17:20:59.775726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:29.743 [2024-11-26 17:20:59.775934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:29.743 [2024-11-26 17:20:59.775952] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:29.743 [2024-11-26 17:20:59.776213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:29.743 [2024-11-26 17:20:59.776385] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:29.743 [2024-11-26 17:20:59.776404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:29.743 [2024-11-26 17:20:59.776584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.743 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:29.743 "name": "raid_bdev1", 00:24:29.744 "uuid": "aa4bc70c-5bd3-40ed-9d60-7023a7c45307", 00:24:29.744 "strip_size_kb": 0, 00:24:29.744 "state": "online", 00:24:29.744 "raid_level": "raid1", 00:24:29.744 "superblock": true, 00:24:29.744 "num_base_bdevs": 2, 00:24:29.744 "num_base_bdevs_discovered": 2, 00:24:29.744 "num_base_bdevs_operational": 
2, 00:24:29.744 "base_bdevs_list": [ 00:24:29.744 { 00:24:29.744 "name": "BaseBdev1", 00:24:29.744 "uuid": "bbc0447d-6074-514f-bb2a-170d6596a0a6", 00:24:29.744 "is_configured": true, 00:24:29.744 "data_offset": 2048, 00:24:29.744 "data_size": 63488 00:24:29.744 }, 00:24:29.744 { 00:24:29.744 "name": "BaseBdev2", 00:24:29.744 "uuid": "67ae66e9-e903-54b8-bf57-8a508fab5e6b", 00:24:29.744 "is_configured": true, 00:24:29.744 "data_offset": 2048, 00:24:29.744 "data_size": 63488 00:24:29.744 } 00:24:29.744 ] 00:24:29.744 }' 00:24:29.744 17:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:29.744 17:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:30.338 17:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:24:30.338 17:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:30.339 [2024-11-26 17:21:00.338170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:24:31.278 
17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:31.278 "name": "raid_bdev1", 00:24:31.278 "uuid": "aa4bc70c-5bd3-40ed-9d60-7023a7c45307", 00:24:31.278 "strip_size_kb": 0, 00:24:31.278 "state": "online", 00:24:31.278 "raid_level": "raid1", 00:24:31.278 "superblock": true, 00:24:31.278 "num_base_bdevs": 
2, 00:24:31.278 "num_base_bdevs_discovered": 2, 00:24:31.278 "num_base_bdevs_operational": 2, 00:24:31.278 "base_bdevs_list": [ 00:24:31.278 { 00:24:31.278 "name": "BaseBdev1", 00:24:31.278 "uuid": "bbc0447d-6074-514f-bb2a-170d6596a0a6", 00:24:31.278 "is_configured": true, 00:24:31.278 "data_offset": 2048, 00:24:31.278 "data_size": 63488 00:24:31.278 }, 00:24:31.278 { 00:24:31.278 "name": "BaseBdev2", 00:24:31.278 "uuid": "67ae66e9-e903-54b8-bf57-8a508fab5e6b", 00:24:31.278 "is_configured": true, 00:24:31.278 "data_offset": 2048, 00:24:31.278 "data_size": 63488 00:24:31.278 } 00:24:31.278 ] 00:24:31.278 }' 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:31.278 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.847 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:31.847 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.847 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.847 [2024-11-26 17:21:01.716993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:31.847 [2024-11-26 17:21:01.717038] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:31.847 [2024-11-26 17:21:01.719702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:31.847 [2024-11-26 17:21:01.719758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:31.847 [2024-11-26 17:21:01.719847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:31.847 [2024-11-26 17:21:01.719863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:31.847 { 00:24:31.847 "results": [ 00:24:31.847 { 00:24:31.847 "job": 
"raid_bdev1", 00:24:31.847 "core_mask": "0x1", 00:24:31.847 "workload": "randrw", 00:24:31.847 "percentage": 50, 00:24:31.847 "status": "finished", 00:24:31.847 "queue_depth": 1, 00:24:31.847 "io_size": 131072, 00:24:31.847 "runtime": 1.37847, 00:24:31.847 "iops": 17899.555304069003, 00:24:31.847 "mibps": 2237.4444130086254, 00:24:31.847 "io_failed": 0, 00:24:31.847 "io_timeout": 0, 00:24:31.847 "avg_latency_us": 53.1014934342216, 00:24:31.847 "min_latency_us": 23.852208835341365, 00:24:31.847 "max_latency_us": 1506.8016064257029 00:24:31.847 } 00:24:31.847 ], 00:24:31.847 "core_count": 1 00:24:31.847 } 00:24:31.847 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.847 17:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63636 00:24:31.847 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63636 ']' 00:24:31.847 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63636 00:24:31.847 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:24:31.847 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:31.847 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63636 00:24:31.847 killing process with pid 63636 00:24:31.847 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:31.847 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:31.847 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63636' 00:24:31.847 17:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63636 00:24:31.847 [2024-11-26 17:21:01.760891] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:31.847 17:21:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63636 00:24:31.847 [2024-11-26 17:21:01.898882] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:33.230 17:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:24:33.230 17:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QHMF9MnfHa 00:24:33.230 17:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:24:33.230 17:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:24:33.230 17:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:24:33.230 17:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:33.230 17:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:24:33.230 17:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:24:33.230 00:24:33.230 real 0m4.482s 00:24:33.230 user 0m5.301s 00:24:33.230 sys 0m0.661s 00:24:33.230 ************************************ 00:24:33.230 END TEST raid_read_error_test 00:24:33.230 ************************************ 00:24:33.230 17:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:33.230 17:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.230 17:21:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:24:33.230 17:21:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:33.230 17:21:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:33.230 17:21:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:33.230 ************************************ 00:24:33.230 START TEST raid_write_error_test 00:24:33.230 ************************************ 00:24:33.230 17:21:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:24:33.230 
17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.p1cjNqefwt 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63782 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63782 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63782 ']' 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:33.230 17:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.490 [2024-11-26 17:21:03.356503] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:24:33.490 [2024-11-26 17:21:03.356650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63782 ] 00:24:33.490 [2024-11-26 17:21:03.540077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.749 [2024-11-26 17:21:03.686360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.033 [2024-11-26 17:21:03.905861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:34.033 [2024-11-26 17:21:03.906109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:34.291 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.291 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:24:34.291 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.292 BaseBdev1_malloc 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.292 true 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.292 [2024-11-26 17:21:04.260143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:34.292 [2024-11-26 17:21:04.260359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:34.292 [2024-11-26 17:21:04.260396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:34.292 [2024-11-26 17:21:04.260412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.292 [2024-11-26 17:21:04.263126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.292 [2024-11-26 17:21:04.263173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:34.292 BaseBdev1 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.292 BaseBdev2_malloc 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:24:34.292 17:21:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.292 true 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.292 [2024-11-26 17:21:04.326331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:34.292 [2024-11-26 17:21:04.326402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:34.292 [2024-11-26 17:21:04.326424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:34.292 [2024-11-26 17:21:04.326440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.292 [2024-11-26 17:21:04.329030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.292 [2024-11-26 17:21:04.329074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:34.292 BaseBdev2 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.292 [2024-11-26 17:21:04.338369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:24:34.292 [2024-11-26 17:21:04.340829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:34.292 [2024-11-26 17:21:04.341051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:34.292 [2024-11-26 17:21:04.341069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:34.292 [2024-11-26 17:21:04.341348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:34.292 [2024-11-26 17:21:04.341585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:34.292 [2024-11-26 17:21:04.341599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:34.292 [2024-11-26 17:21:04.341780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:34.292 "name": "raid_bdev1", 00:24:34.292 "uuid": "3f995b8c-23e1-44bc-80a9-d9eed5ab71a3", 00:24:34.292 "strip_size_kb": 0, 00:24:34.292 "state": "online", 00:24:34.292 "raid_level": "raid1", 00:24:34.292 "superblock": true, 00:24:34.292 "num_base_bdevs": 2, 00:24:34.292 "num_base_bdevs_discovered": 2, 00:24:34.292 "num_base_bdevs_operational": 2, 00:24:34.292 "base_bdevs_list": [ 00:24:34.292 { 00:24:34.292 "name": "BaseBdev1", 00:24:34.292 "uuid": "d1c83bdf-bc99-5bc1-9ced-89e6aca97d9a", 00:24:34.292 "is_configured": true, 00:24:34.292 "data_offset": 2048, 00:24:34.292 "data_size": 63488 00:24:34.292 }, 00:24:34.292 { 00:24:34.292 "name": "BaseBdev2", 00:24:34.292 "uuid": "58540e7c-31fc-5414-bbbe-3abdd501a5b3", 00:24:34.292 "is_configured": true, 00:24:34.292 "data_offset": 2048, 00:24:34.292 "data_size": 63488 00:24:34.292 } 00:24:34.292 ] 00:24:34.292 }' 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:34.292 17:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.858 17:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:24:34.858 17:21:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:34.858 [2024-11-26 17:21:04.839195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.793 [2024-11-26 17:21:05.752045] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:24:35.793 [2024-11-26 17:21:05.752113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:35.793 [2024-11-26 17:21:05.752322] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:35.793 "name": "raid_bdev1", 00:24:35.793 "uuid": "3f995b8c-23e1-44bc-80a9-d9eed5ab71a3", 00:24:35.793 "strip_size_kb": 0, 00:24:35.793 "state": "online", 00:24:35.793 "raid_level": "raid1", 00:24:35.793 "superblock": true, 00:24:35.793 "num_base_bdevs": 2, 00:24:35.793 "num_base_bdevs_discovered": 1, 00:24:35.793 "num_base_bdevs_operational": 1, 00:24:35.793 "base_bdevs_list": [ 00:24:35.793 { 00:24:35.793 "name": null, 00:24:35.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.793 "is_configured": false, 00:24:35.793 "data_offset": 0, 00:24:35.793 "data_size": 63488 00:24:35.793 }, 00:24:35.793 { 00:24:35.793 "name": 
"BaseBdev2", 00:24:35.793 "uuid": "58540e7c-31fc-5414-bbbe-3abdd501a5b3", 00:24:35.793 "is_configured": true, 00:24:35.793 "data_offset": 2048, 00:24:35.793 "data_size": 63488 00:24:35.793 } 00:24:35.793 ] 00:24:35.793 }' 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:35.793 17:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:36.051 17:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:36.052 17:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.052 17:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:36.052 [2024-11-26 17:21:06.124671] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:36.052 [2024-11-26 17:21:06.124707] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:36.052 [2024-11-26 17:21:06.127278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:36.052 [2024-11-26 17:21:06.127322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:36.052 [2024-11-26 17:21:06.127386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:36.052 [2024-11-26 17:21:06.127401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:36.052 17:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.052 { 00:24:36.052 "results": [ 00:24:36.052 { 00:24:36.052 "job": "raid_bdev1", 00:24:36.052 "core_mask": "0x1", 00:24:36.052 "workload": "randrw", 00:24:36.052 "percentage": 50, 00:24:36.052 "status": "finished", 00:24:36.052 "queue_depth": 1, 00:24:36.052 "io_size": 131072, 00:24:36.052 "runtime": 1.285039, 00:24:36.052 "iops": 20254.6381860784, 
00:24:36.052 "mibps": 2531.8297732598, 00:24:36.052 "io_failed": 0, 00:24:36.052 "io_timeout": 0, 00:24:36.052 "avg_latency_us": 46.73211117097867, 00:24:36.052 "min_latency_us": 23.02971887550201, 00:24:36.052 "max_latency_us": 1434.4224899598394 00:24:36.052 } 00:24:36.052 ], 00:24:36.052 "core_count": 1 00:24:36.052 } 00:24:36.052 17:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63782 00:24:36.052 17:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63782 ']' 00:24:36.052 17:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63782 00:24:36.052 17:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:24:36.052 17:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:36.052 17:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63782 00:24:36.310 17:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:36.310 17:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:36.310 17:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63782' 00:24:36.310 killing process with pid 63782 00:24:36.310 17:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63782 00:24:36.310 [2024-11-26 17:21:06.180843] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:36.310 17:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63782 00:24:36.310 [2024-11-26 17:21:06.326099] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:37.686 17:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.p1cjNqefwt 00:24:37.686 17:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- 
# grep raid_bdev1 00:24:37.686 17:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:24:37.686 17:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:24:37.686 17:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:24:37.686 17:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:37.686 17:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:24:37.686 17:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:24:37.686 00:24:37.686 real 0m4.339s 00:24:37.686 user 0m5.013s 00:24:37.686 sys 0m0.658s 00:24:37.686 17:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.686 ************************************ 00:24:37.686 END TEST raid_write_error_test 00:24:37.686 ************************************ 00:24:37.686 17:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:37.686 17:21:07 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:24:37.686 17:21:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:24:37.686 17:21:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:24:37.686 17:21:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:37.686 17:21:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.686 17:21:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:37.686 ************************************ 00:24:37.686 START TEST raid_state_function_test 00:24:37.686 ************************************ 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local 
raid_level=raid0 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:37.686 17:21:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63920 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63920' 00:24:37.686 Process raid pid: 63920 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63920 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63920 ']' 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.686 17:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:37.687 [2024-11-26 17:21:07.767154] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:24:37.687 [2024-11-26 17:21:07.767482] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.946 [2024-11-26 17:21:07.951972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.205 [2024-11-26 17:21:08.102922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.464 [2024-11-26 17:21:08.343376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:38.464 [2024-11-26 17:21:08.343426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.724 [2024-11-26 17:21:08.607115] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:38.724 [2024-11-26 17:21:08.607185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:38.724 [2024-11-26 17:21:08.607198] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:38.724 [2024-11-26 17:21:08.607212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:38.724 [2024-11-26 17:21:08.607220] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:38.724 [2024-11-26 17:21:08.607232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.724 17:21:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:38.724 "name": "Existed_Raid", 00:24:38.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.724 "strip_size_kb": 64, 00:24:38.724 "state": "configuring", 00:24:38.724 "raid_level": "raid0", 00:24:38.724 "superblock": false, 00:24:38.724 "num_base_bdevs": 3, 00:24:38.724 "num_base_bdevs_discovered": 0, 00:24:38.724 "num_base_bdevs_operational": 3, 00:24:38.724 "base_bdevs_list": [ 00:24:38.724 { 00:24:38.724 "name": "BaseBdev1", 00:24:38.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.724 "is_configured": false, 00:24:38.724 "data_offset": 0, 00:24:38.724 "data_size": 0 00:24:38.724 }, 00:24:38.724 { 00:24:38.724 "name": "BaseBdev2", 00:24:38.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.724 "is_configured": false, 00:24:38.724 "data_offset": 0, 00:24:38.724 "data_size": 0 00:24:38.724 }, 00:24:38.724 { 00:24:38.724 "name": "BaseBdev3", 00:24:38.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.724 "is_configured": false, 00:24:38.724 "data_offset": 0, 00:24:38.724 "data_size": 0 00:24:38.724 } 00:24:38.724 ] 00:24:38.724 }' 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:38.724 17:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.986 17:21:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.986 [2024-11-26 17:21:09.030495] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:38.986 [2024-11-26 17:21:09.030677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.986 [2024-11-26 17:21:09.038457] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:38.986 [2024-11-26 17:21:09.038513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:38.986 [2024-11-26 17:21:09.038535] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:38.986 [2024-11-26 17:21:09.038550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:38.986 [2024-11-26 17:21:09.038558] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:38.986 [2024-11-26 17:21:09.038571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.986 [2024-11-26 17:21:09.086312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:38.986 BaseBdev1 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.986 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.307 [ 00:24:39.307 { 00:24:39.307 "name": "BaseBdev1", 00:24:39.307 "aliases": [ 00:24:39.307 "c422d48b-09a7-42e2-8dc8-5272802469aa" 00:24:39.307 ], 00:24:39.307 
"product_name": "Malloc disk", 00:24:39.307 "block_size": 512, 00:24:39.307 "num_blocks": 65536, 00:24:39.307 "uuid": "c422d48b-09a7-42e2-8dc8-5272802469aa", 00:24:39.307 "assigned_rate_limits": { 00:24:39.307 "rw_ios_per_sec": 0, 00:24:39.307 "rw_mbytes_per_sec": 0, 00:24:39.307 "r_mbytes_per_sec": 0, 00:24:39.307 "w_mbytes_per_sec": 0 00:24:39.307 }, 00:24:39.307 "claimed": true, 00:24:39.307 "claim_type": "exclusive_write", 00:24:39.307 "zoned": false, 00:24:39.307 "supported_io_types": { 00:24:39.307 "read": true, 00:24:39.307 "write": true, 00:24:39.307 "unmap": true, 00:24:39.307 "flush": true, 00:24:39.307 "reset": true, 00:24:39.307 "nvme_admin": false, 00:24:39.307 "nvme_io": false, 00:24:39.307 "nvme_io_md": false, 00:24:39.307 "write_zeroes": true, 00:24:39.307 "zcopy": true, 00:24:39.307 "get_zone_info": false, 00:24:39.307 "zone_management": false, 00:24:39.307 "zone_append": false, 00:24:39.307 "compare": false, 00:24:39.307 "compare_and_write": false, 00:24:39.307 "abort": true, 00:24:39.307 "seek_hole": false, 00:24:39.307 "seek_data": false, 00:24:39.307 "copy": true, 00:24:39.307 "nvme_iov_md": false 00:24:39.307 }, 00:24:39.307 "memory_domains": [ 00:24:39.307 { 00:24:39.307 "dma_device_id": "system", 00:24:39.307 "dma_device_type": 1 00:24:39.307 }, 00:24:39.307 { 00:24:39.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.307 "dma_device_type": 2 00:24:39.307 } 00:24:39.307 ], 00:24:39.307 "driver_specific": {} 00:24:39.307 } 00:24:39.307 ] 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:39.307 17:21:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:39.307 "name": "Existed_Raid", 00:24:39.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.307 "strip_size_kb": 64, 00:24:39.307 "state": "configuring", 00:24:39.307 "raid_level": "raid0", 00:24:39.307 "superblock": false, 00:24:39.307 "num_base_bdevs": 3, 00:24:39.307 "num_base_bdevs_discovered": 1, 00:24:39.307 "num_base_bdevs_operational": 3, 00:24:39.307 "base_bdevs_list": [ 00:24:39.307 { 00:24:39.307 "name": "BaseBdev1", 
00:24:39.307 "uuid": "c422d48b-09a7-42e2-8dc8-5272802469aa", 00:24:39.307 "is_configured": true, 00:24:39.307 "data_offset": 0, 00:24:39.307 "data_size": 65536 00:24:39.307 }, 00:24:39.307 { 00:24:39.307 "name": "BaseBdev2", 00:24:39.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.307 "is_configured": false, 00:24:39.307 "data_offset": 0, 00:24:39.307 "data_size": 0 00:24:39.307 }, 00:24:39.307 { 00:24:39.307 "name": "BaseBdev3", 00:24:39.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.307 "is_configured": false, 00:24:39.307 "data_offset": 0, 00:24:39.307 "data_size": 0 00:24:39.307 } 00:24:39.307 ] 00:24:39.307 }' 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:39.307 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.581 [2024-11-26 17:21:09.573687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:39.581 [2024-11-26 17:21:09.573756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.581 [2024-11-26 
17:21:09.581749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:39.581 [2024-11-26 17:21:09.584057] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:39.581 [2024-11-26 17:21:09.584106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:39.581 [2024-11-26 17:21:09.584119] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:39.581 [2024-11-26 17:21:09.584133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:39.581 "name": "Existed_Raid", 00:24:39.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.581 "strip_size_kb": 64, 00:24:39.581 "state": "configuring", 00:24:39.581 "raid_level": "raid0", 00:24:39.581 "superblock": false, 00:24:39.581 "num_base_bdevs": 3, 00:24:39.581 "num_base_bdevs_discovered": 1, 00:24:39.581 "num_base_bdevs_operational": 3, 00:24:39.581 "base_bdevs_list": [ 00:24:39.581 { 00:24:39.581 "name": "BaseBdev1", 00:24:39.581 "uuid": "c422d48b-09a7-42e2-8dc8-5272802469aa", 00:24:39.581 "is_configured": true, 00:24:39.581 "data_offset": 0, 00:24:39.581 "data_size": 65536 00:24:39.581 }, 00:24:39.581 { 00:24:39.581 "name": "BaseBdev2", 00:24:39.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.581 "is_configured": false, 00:24:39.581 "data_offset": 0, 00:24:39.581 "data_size": 0 00:24:39.581 }, 00:24:39.581 { 00:24:39.581 "name": "BaseBdev3", 00:24:39.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.581 "is_configured": false, 00:24:39.581 "data_offset": 0, 00:24:39.581 "data_size": 0 00:24:39.581 } 00:24:39.581 ] 00:24:39.581 }' 00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:24:39.581 17:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.148 [2024-11-26 17:21:10.077345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:40.148 BaseBdev2 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:40.148 17:21:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.148 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.148 [ 00:24:40.148 { 00:24:40.148 "name": "BaseBdev2", 00:24:40.148 "aliases": [ 00:24:40.148 "cf69e930-8422-4953-94ae-b38239b3fb15" 00:24:40.148 ], 00:24:40.148 "product_name": "Malloc disk", 00:24:40.148 "block_size": 512, 00:24:40.148 "num_blocks": 65536, 00:24:40.148 "uuid": "cf69e930-8422-4953-94ae-b38239b3fb15", 00:24:40.148 "assigned_rate_limits": { 00:24:40.148 "rw_ios_per_sec": 0, 00:24:40.148 "rw_mbytes_per_sec": 0, 00:24:40.148 "r_mbytes_per_sec": 0, 00:24:40.148 "w_mbytes_per_sec": 0 00:24:40.148 }, 00:24:40.148 "claimed": true, 00:24:40.148 "claim_type": "exclusive_write", 00:24:40.148 "zoned": false, 00:24:40.148 "supported_io_types": { 00:24:40.148 "read": true, 00:24:40.148 "write": true, 00:24:40.148 "unmap": true, 00:24:40.148 "flush": true, 00:24:40.148 "reset": true, 00:24:40.148 "nvme_admin": false, 00:24:40.148 "nvme_io": false, 00:24:40.148 "nvme_io_md": false, 00:24:40.148 "write_zeroes": true, 00:24:40.148 "zcopy": true, 00:24:40.149 "get_zone_info": false, 00:24:40.149 "zone_management": false, 00:24:40.149 "zone_append": false, 00:24:40.149 "compare": false, 00:24:40.149 "compare_and_write": false, 00:24:40.149 "abort": true, 00:24:40.149 "seek_hole": false, 00:24:40.149 "seek_data": false, 00:24:40.149 "copy": true, 00:24:40.149 "nvme_iov_md": false 00:24:40.149 }, 00:24:40.149 "memory_domains": [ 00:24:40.149 { 00:24:40.149 "dma_device_id": "system", 00:24:40.149 "dma_device_type": 1 00:24:40.149 }, 00:24:40.149 { 00:24:40.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:40.149 "dma_device_type": 2 00:24:40.149 } 00:24:40.149 ], 00:24:40.149 "driver_specific": {} 00:24:40.149 } 00:24:40.149 ] 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.149 17:21:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:40.149 "name": "Existed_Raid", 00:24:40.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.149 "strip_size_kb": 64, 00:24:40.149 "state": "configuring", 00:24:40.149 "raid_level": "raid0", 00:24:40.149 "superblock": false, 00:24:40.149 "num_base_bdevs": 3, 00:24:40.149 "num_base_bdevs_discovered": 2, 00:24:40.149 "num_base_bdevs_operational": 3, 00:24:40.149 "base_bdevs_list": [ 00:24:40.149 { 00:24:40.149 "name": "BaseBdev1", 00:24:40.149 "uuid": "c422d48b-09a7-42e2-8dc8-5272802469aa", 00:24:40.149 "is_configured": true, 00:24:40.149 "data_offset": 0, 00:24:40.149 "data_size": 65536 00:24:40.149 }, 00:24:40.149 { 00:24:40.149 "name": "BaseBdev2", 00:24:40.149 "uuid": "cf69e930-8422-4953-94ae-b38239b3fb15", 00:24:40.149 "is_configured": true, 00:24:40.149 "data_offset": 0, 00:24:40.149 "data_size": 65536 00:24:40.149 }, 00:24:40.149 { 00:24:40.149 "name": "BaseBdev3", 00:24:40.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.149 "is_configured": false, 00:24:40.149 "data_offset": 0, 00:24:40.149 "data_size": 0 00:24:40.149 } 00:24:40.149 ] 00:24:40.149 }' 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:40.149 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.717 [2024-11-26 17:21:10.602855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:40.717 [2024-11-26 17:21:10.602915] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:40.717 [2024-11-26 17:21:10.602934] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:24:40.717 [2024-11-26 17:21:10.603248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:40.717 [2024-11-26 17:21:10.603428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:40.717 [2024-11-26 17:21:10.603439] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:40.717 [2024-11-26 17:21:10.603771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.717 BaseBdev3 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.717 
17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.717 [ 00:24:40.717 { 00:24:40.717 "name": "BaseBdev3", 00:24:40.717 "aliases": [ 00:24:40.717 "6c17f06f-f521-4307-9c5f-3fc6115942a2" 00:24:40.717 ], 00:24:40.717 "product_name": "Malloc disk", 00:24:40.717 "block_size": 512, 00:24:40.717 "num_blocks": 65536, 00:24:40.717 "uuid": "6c17f06f-f521-4307-9c5f-3fc6115942a2", 00:24:40.717 "assigned_rate_limits": { 00:24:40.717 "rw_ios_per_sec": 0, 00:24:40.717 "rw_mbytes_per_sec": 0, 00:24:40.717 "r_mbytes_per_sec": 0, 00:24:40.717 "w_mbytes_per_sec": 0 00:24:40.717 }, 00:24:40.717 "claimed": true, 00:24:40.717 "claim_type": "exclusive_write", 00:24:40.717 "zoned": false, 00:24:40.717 "supported_io_types": { 00:24:40.717 "read": true, 00:24:40.717 "write": true, 00:24:40.717 "unmap": true, 00:24:40.717 "flush": true, 00:24:40.717 "reset": true, 00:24:40.717 "nvme_admin": false, 00:24:40.717 "nvme_io": false, 00:24:40.717 "nvme_io_md": false, 00:24:40.717 "write_zeroes": true, 00:24:40.717 "zcopy": true, 00:24:40.717 "get_zone_info": false, 00:24:40.717 "zone_management": false, 00:24:40.717 "zone_append": false, 00:24:40.717 "compare": false, 00:24:40.717 "compare_and_write": false, 00:24:40.717 "abort": true, 00:24:40.717 "seek_hole": false, 00:24:40.717 "seek_data": false, 00:24:40.717 "copy": true, 00:24:40.717 "nvme_iov_md": false 00:24:40.717 }, 00:24:40.717 "memory_domains": [ 00:24:40.717 { 00:24:40.717 "dma_device_id": "system", 00:24:40.717 "dma_device_type": 1 00:24:40.717 }, 00:24:40.717 { 00:24:40.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:40.717 "dma_device_type": 2 00:24:40.717 } 00:24:40.717 ], 00:24:40.717 "driver_specific": {} 00:24:40.717 } 00:24:40.717 ] 
00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:40.717 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.718 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.718 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.718 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:24:40.718 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.718 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:40.718 "name": "Existed_Raid", 00:24:40.718 "uuid": "821d3f14-8ce9-4cb8-b96e-a70ffec92d81", 00:24:40.718 "strip_size_kb": 64, 00:24:40.718 "state": "online", 00:24:40.718 "raid_level": "raid0", 00:24:40.718 "superblock": false, 00:24:40.718 "num_base_bdevs": 3, 00:24:40.718 "num_base_bdevs_discovered": 3, 00:24:40.718 "num_base_bdevs_operational": 3, 00:24:40.718 "base_bdevs_list": [ 00:24:40.718 { 00:24:40.718 "name": "BaseBdev1", 00:24:40.718 "uuid": "c422d48b-09a7-42e2-8dc8-5272802469aa", 00:24:40.718 "is_configured": true, 00:24:40.718 "data_offset": 0, 00:24:40.718 "data_size": 65536 00:24:40.718 }, 00:24:40.718 { 00:24:40.718 "name": "BaseBdev2", 00:24:40.718 "uuid": "cf69e930-8422-4953-94ae-b38239b3fb15", 00:24:40.718 "is_configured": true, 00:24:40.718 "data_offset": 0, 00:24:40.718 "data_size": 65536 00:24:40.718 }, 00:24:40.718 { 00:24:40.718 "name": "BaseBdev3", 00:24:40.718 "uuid": "6c17f06f-f521-4307-9c5f-3fc6115942a2", 00:24:40.718 "is_configured": true, 00:24:40.718 "data_offset": 0, 00:24:40.718 "data_size": 65536 00:24:40.718 } 00:24:40.718 ] 00:24:40.718 }' 00:24:40.718 17:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:40.718 17:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.286 [2024-11-26 17:21:11.106613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:41.286 "name": "Existed_Raid", 00:24:41.286 "aliases": [ 00:24:41.286 "821d3f14-8ce9-4cb8-b96e-a70ffec92d81" 00:24:41.286 ], 00:24:41.286 "product_name": "Raid Volume", 00:24:41.286 "block_size": 512, 00:24:41.286 "num_blocks": 196608, 00:24:41.286 "uuid": "821d3f14-8ce9-4cb8-b96e-a70ffec92d81", 00:24:41.286 "assigned_rate_limits": { 00:24:41.286 "rw_ios_per_sec": 0, 00:24:41.286 "rw_mbytes_per_sec": 0, 00:24:41.286 "r_mbytes_per_sec": 0, 00:24:41.286 "w_mbytes_per_sec": 0 00:24:41.286 }, 00:24:41.286 "claimed": false, 00:24:41.286 "zoned": false, 00:24:41.286 "supported_io_types": { 00:24:41.286 "read": true, 00:24:41.286 "write": true, 00:24:41.286 "unmap": true, 00:24:41.286 "flush": true, 00:24:41.286 "reset": true, 00:24:41.286 "nvme_admin": false, 00:24:41.286 "nvme_io": false, 00:24:41.286 "nvme_io_md": false, 00:24:41.286 "write_zeroes": true, 00:24:41.286 "zcopy": false, 00:24:41.286 "get_zone_info": false, 00:24:41.286 "zone_management": false, 00:24:41.286 
"zone_append": false, 00:24:41.286 "compare": false, 00:24:41.286 "compare_and_write": false, 00:24:41.286 "abort": false, 00:24:41.286 "seek_hole": false, 00:24:41.286 "seek_data": false, 00:24:41.286 "copy": false, 00:24:41.286 "nvme_iov_md": false 00:24:41.286 }, 00:24:41.286 "memory_domains": [ 00:24:41.286 { 00:24:41.286 "dma_device_id": "system", 00:24:41.286 "dma_device_type": 1 00:24:41.286 }, 00:24:41.286 { 00:24:41.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.286 "dma_device_type": 2 00:24:41.286 }, 00:24:41.286 { 00:24:41.286 "dma_device_id": "system", 00:24:41.286 "dma_device_type": 1 00:24:41.286 }, 00:24:41.286 { 00:24:41.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.286 "dma_device_type": 2 00:24:41.286 }, 00:24:41.286 { 00:24:41.286 "dma_device_id": "system", 00:24:41.286 "dma_device_type": 1 00:24:41.286 }, 00:24:41.286 { 00:24:41.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.286 "dma_device_type": 2 00:24:41.286 } 00:24:41.286 ], 00:24:41.286 "driver_specific": { 00:24:41.286 "raid": { 00:24:41.286 "uuid": "821d3f14-8ce9-4cb8-b96e-a70ffec92d81", 00:24:41.286 "strip_size_kb": 64, 00:24:41.286 "state": "online", 00:24:41.286 "raid_level": "raid0", 00:24:41.286 "superblock": false, 00:24:41.286 "num_base_bdevs": 3, 00:24:41.286 "num_base_bdevs_discovered": 3, 00:24:41.286 "num_base_bdevs_operational": 3, 00:24:41.286 "base_bdevs_list": [ 00:24:41.286 { 00:24:41.286 "name": "BaseBdev1", 00:24:41.286 "uuid": "c422d48b-09a7-42e2-8dc8-5272802469aa", 00:24:41.286 "is_configured": true, 00:24:41.286 "data_offset": 0, 00:24:41.286 "data_size": 65536 00:24:41.286 }, 00:24:41.286 { 00:24:41.286 "name": "BaseBdev2", 00:24:41.286 "uuid": "cf69e930-8422-4953-94ae-b38239b3fb15", 00:24:41.286 "is_configured": true, 00:24:41.286 "data_offset": 0, 00:24:41.286 "data_size": 65536 00:24:41.286 }, 00:24:41.286 { 00:24:41.286 "name": "BaseBdev3", 00:24:41.286 "uuid": "6c17f06f-f521-4307-9c5f-3fc6115942a2", 00:24:41.286 "is_configured": true, 
00:24:41.286 "data_offset": 0, 00:24:41.286 "data_size": 65536 00:24:41.286 } 00:24:41.286 ] 00:24:41.286 } 00:24:41.286 } 00:24:41.286 }' 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:41.286 BaseBdev2 00:24:41.286 BaseBdev3' 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:41.286 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.287 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:41.287 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:41.287 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:41.287 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.287 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.287 [2024-11-26 17:21:11.353997] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:41.287 [2024-11-26 17:21:11.354032] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:41.287 [2024-11-26 17:21:11.354095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.546 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:41.546 "name": "Existed_Raid", 00:24:41.546 "uuid": "821d3f14-8ce9-4cb8-b96e-a70ffec92d81", 00:24:41.546 "strip_size_kb": 64, 00:24:41.546 "state": "offline", 00:24:41.546 "raid_level": "raid0", 00:24:41.546 "superblock": false, 00:24:41.546 "num_base_bdevs": 3, 00:24:41.546 "num_base_bdevs_discovered": 2, 00:24:41.546 "num_base_bdevs_operational": 2, 00:24:41.546 "base_bdevs_list": [ 00:24:41.546 { 00:24:41.546 "name": null, 00:24:41.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.546 "is_configured": false, 00:24:41.546 "data_offset": 0, 00:24:41.546 "data_size": 65536 00:24:41.546 }, 00:24:41.546 { 00:24:41.546 "name": "BaseBdev2", 00:24:41.546 "uuid": "cf69e930-8422-4953-94ae-b38239b3fb15", 00:24:41.546 "is_configured": true, 00:24:41.546 "data_offset": 0, 00:24:41.546 "data_size": 65536 00:24:41.547 }, 00:24:41.547 { 00:24:41.547 "name": "BaseBdev3", 00:24:41.547 "uuid": "6c17f06f-f521-4307-9c5f-3fc6115942a2", 00:24:41.547 "is_configured": true, 00:24:41.547 "data_offset": 0, 00:24:41.547 "data_size": 65536 00:24:41.547 } 00:24:41.547 ] 00:24:41.547 }' 00:24:41.547 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:41.547 17:21:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.806 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:41.806 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:41.806 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:41.806 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.806 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.806 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.806 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.806 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:41.806 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:41.806 17:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:41.806 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.806 17:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.065 [2024-11-26 17:21:11.921811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:42.065 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.065 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:42.065 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:42.065 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:42.065 17:21:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:42.065 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.065 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.065 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.065 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:42.065 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:42.065 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:24:42.065 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.065 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.065 [2024-11-26 17:21:12.078398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:42.065 [2024-11-26 17:21:12.078462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.326 BaseBdev2 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.326 [ 00:24:42.326 { 00:24:42.326 "name": "BaseBdev2", 00:24:42.326 "aliases": [ 00:24:42.326 "2c96e008-1748-4576-a518-4b59784a6316" 00:24:42.326 ], 00:24:42.326 "product_name": "Malloc disk", 00:24:42.326 "block_size": 512, 00:24:42.326 "num_blocks": 65536, 00:24:42.326 "uuid": "2c96e008-1748-4576-a518-4b59784a6316", 00:24:42.326 "assigned_rate_limits": { 00:24:42.326 "rw_ios_per_sec": 0, 00:24:42.326 "rw_mbytes_per_sec": 0, 00:24:42.326 "r_mbytes_per_sec": 0, 00:24:42.326 "w_mbytes_per_sec": 0 00:24:42.326 }, 00:24:42.326 "claimed": false, 00:24:42.326 "zoned": false, 00:24:42.326 "supported_io_types": { 00:24:42.326 "read": true, 00:24:42.326 "write": true, 00:24:42.326 "unmap": true, 00:24:42.326 "flush": true, 00:24:42.326 "reset": true, 00:24:42.326 "nvme_admin": false, 00:24:42.326 "nvme_io": false, 00:24:42.326 "nvme_io_md": false, 00:24:42.326 "write_zeroes": true, 00:24:42.326 "zcopy": true, 00:24:42.326 "get_zone_info": false, 00:24:42.326 "zone_management": false, 00:24:42.326 "zone_append": false, 00:24:42.326 "compare": false, 00:24:42.326 "compare_and_write": false, 00:24:42.326 "abort": true, 00:24:42.326 "seek_hole": false, 00:24:42.326 "seek_data": false, 00:24:42.326 "copy": true, 00:24:42.326 "nvme_iov_md": false 00:24:42.326 }, 00:24:42.326 "memory_domains": [ 00:24:42.326 { 00:24:42.326 "dma_device_id": "system", 00:24:42.326 "dma_device_type": 1 00:24:42.326 }, 
00:24:42.326 { 00:24:42.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:42.326 "dma_device_type": 2 00:24:42.326 } 00:24:42.326 ], 00:24:42.326 "driver_specific": {} 00:24:42.326 } 00:24:42.326 ] 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.326 BaseBdev3 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.326 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.327 [ 00:24:42.327 { 00:24:42.327 "name": "BaseBdev3", 00:24:42.327 "aliases": [ 00:24:42.327 "af3c06bb-8713-4220-89e9-3b324d6b5401" 00:24:42.327 ], 00:24:42.327 "product_name": "Malloc disk", 00:24:42.327 "block_size": 512, 00:24:42.327 "num_blocks": 65536, 00:24:42.327 "uuid": "af3c06bb-8713-4220-89e9-3b324d6b5401", 00:24:42.327 "assigned_rate_limits": { 00:24:42.327 "rw_ios_per_sec": 0, 00:24:42.327 "rw_mbytes_per_sec": 0, 00:24:42.327 "r_mbytes_per_sec": 0, 00:24:42.327 "w_mbytes_per_sec": 0 00:24:42.327 }, 00:24:42.327 "claimed": false, 00:24:42.327 "zoned": false, 00:24:42.327 "supported_io_types": { 00:24:42.327 "read": true, 00:24:42.327 "write": true, 00:24:42.327 "unmap": true, 00:24:42.327 "flush": true, 00:24:42.327 "reset": true, 00:24:42.327 "nvme_admin": false, 00:24:42.327 "nvme_io": false, 00:24:42.327 "nvme_io_md": false, 00:24:42.327 "write_zeroes": true, 00:24:42.327 "zcopy": true, 00:24:42.327 "get_zone_info": false, 00:24:42.327 "zone_management": false, 00:24:42.327 "zone_append": false, 00:24:42.327 "compare": false, 00:24:42.327 "compare_and_write": false, 00:24:42.327 "abort": true, 00:24:42.327 "seek_hole": false, 00:24:42.327 "seek_data": false, 00:24:42.327 "copy": true, 00:24:42.327 "nvme_iov_md": false 00:24:42.327 }, 00:24:42.327 "memory_domains": [ 00:24:42.327 { 00:24:42.327 "dma_device_id": "system", 00:24:42.327 "dma_device_type": 1 00:24:42.327 }, 00:24:42.327 { 
00:24:42.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:42.327 "dma_device_type": 2 00:24:42.327 } 00:24:42.327 ], 00:24:42.327 "driver_specific": {} 00:24:42.327 } 00:24:42.327 ] 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.327 [2024-11-26 17:21:12.404467] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:42.327 [2024-11-26 17:21:12.404662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:42.327 [2024-11-26 17:21:12.404710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:42.327 [2024-11-26 17:21:12.407074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:42.327 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.587 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:42.587 "name": "Existed_Raid", 00:24:42.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.587 "strip_size_kb": 64, 00:24:42.587 "state": "configuring", 00:24:42.587 "raid_level": "raid0", 00:24:42.587 "superblock": false, 00:24:42.587 "num_base_bdevs": 3, 00:24:42.587 "num_base_bdevs_discovered": 2, 00:24:42.587 "num_base_bdevs_operational": 3, 00:24:42.587 "base_bdevs_list": [ 00:24:42.587 { 00:24:42.587 "name": "BaseBdev1", 00:24:42.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.587 
"is_configured": false, 00:24:42.587 "data_offset": 0, 00:24:42.587 "data_size": 0 00:24:42.587 }, 00:24:42.587 { 00:24:42.587 "name": "BaseBdev2", 00:24:42.587 "uuid": "2c96e008-1748-4576-a518-4b59784a6316", 00:24:42.587 "is_configured": true, 00:24:42.587 "data_offset": 0, 00:24:42.587 "data_size": 65536 00:24:42.587 }, 00:24:42.587 { 00:24:42.587 "name": "BaseBdev3", 00:24:42.587 "uuid": "af3c06bb-8713-4220-89e9-3b324d6b5401", 00:24:42.587 "is_configured": true, 00:24:42.587 "data_offset": 0, 00:24:42.587 "data_size": 65536 00:24:42.587 } 00:24:42.587 ] 00:24:42.587 }' 00:24:42.587 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:42.587 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.847 [2024-11-26 17:21:12.827892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:42.847 17:21:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:42.847 "name": "Existed_Raid", 00:24:42.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.847 "strip_size_kb": 64, 00:24:42.847 "state": "configuring", 00:24:42.847 "raid_level": "raid0", 00:24:42.847 "superblock": false, 00:24:42.847 "num_base_bdevs": 3, 00:24:42.847 "num_base_bdevs_discovered": 1, 00:24:42.847 "num_base_bdevs_operational": 3, 00:24:42.847 "base_bdevs_list": [ 00:24:42.847 { 00:24:42.847 "name": "BaseBdev1", 00:24:42.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.847 "is_configured": false, 00:24:42.847 "data_offset": 0, 00:24:42.847 "data_size": 0 00:24:42.847 }, 00:24:42.847 { 00:24:42.847 "name": null, 00:24:42.847 "uuid": "2c96e008-1748-4576-a518-4b59784a6316", 00:24:42.847 "is_configured": false, 00:24:42.847 "data_offset": 0, 
00:24:42.847 "data_size": 65536 00:24:42.847 }, 00:24:42.847 { 00:24:42.847 "name": "BaseBdev3", 00:24:42.847 "uuid": "af3c06bb-8713-4220-89e9-3b324d6b5401", 00:24:42.847 "is_configured": true, 00:24:42.847 "data_offset": 0, 00:24:42.847 "data_size": 65536 00:24:42.847 } 00:24:42.847 ] 00:24:42.847 }' 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:42.847 17:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.417 [2024-11-26 17:21:13.335430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:43.417 BaseBdev1 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.417 [ 00:24:43.417 { 00:24:43.417 "name": "BaseBdev1", 00:24:43.417 "aliases": [ 00:24:43.417 "43babe0c-1bf1-4b87-abe2-ae57ed4b9cbb" 00:24:43.417 ], 00:24:43.417 "product_name": "Malloc disk", 00:24:43.417 "block_size": 512, 00:24:43.417 "num_blocks": 65536, 00:24:43.417 "uuid": "43babe0c-1bf1-4b87-abe2-ae57ed4b9cbb", 00:24:43.417 "assigned_rate_limits": { 00:24:43.417 "rw_ios_per_sec": 0, 00:24:43.417 "rw_mbytes_per_sec": 0, 00:24:43.417 "r_mbytes_per_sec": 0, 00:24:43.417 "w_mbytes_per_sec": 0 00:24:43.417 }, 00:24:43.417 "claimed": true, 00:24:43.417 "claim_type": "exclusive_write", 00:24:43.417 "zoned": false, 00:24:43.417 "supported_io_types": { 00:24:43.417 "read": true, 00:24:43.417 "write": true, 00:24:43.417 "unmap": 
true, 00:24:43.417 "flush": true, 00:24:43.417 "reset": true, 00:24:43.417 "nvme_admin": false, 00:24:43.417 "nvme_io": false, 00:24:43.417 "nvme_io_md": false, 00:24:43.417 "write_zeroes": true, 00:24:43.417 "zcopy": true, 00:24:43.417 "get_zone_info": false, 00:24:43.417 "zone_management": false, 00:24:43.417 "zone_append": false, 00:24:43.417 "compare": false, 00:24:43.417 "compare_and_write": false, 00:24:43.417 "abort": true, 00:24:43.417 "seek_hole": false, 00:24:43.417 "seek_data": false, 00:24:43.417 "copy": true, 00:24:43.417 "nvme_iov_md": false 00:24:43.417 }, 00:24:43.417 "memory_domains": [ 00:24:43.417 { 00:24:43.417 "dma_device_id": "system", 00:24:43.417 "dma_device_type": 1 00:24:43.417 }, 00:24:43.417 { 00:24:43.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.417 "dma_device_type": 2 00:24:43.417 } 00:24:43.417 ], 00:24:43.417 "driver_specific": {} 00:24:43.417 } 00:24:43.417 ] 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:43.417 17:21:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.417 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:43.417 "name": "Existed_Raid", 00:24:43.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.417 "strip_size_kb": 64, 00:24:43.417 "state": "configuring", 00:24:43.417 "raid_level": "raid0", 00:24:43.417 "superblock": false, 00:24:43.417 "num_base_bdevs": 3, 00:24:43.417 "num_base_bdevs_discovered": 2, 00:24:43.417 "num_base_bdevs_operational": 3, 00:24:43.417 "base_bdevs_list": [ 00:24:43.417 { 00:24:43.417 "name": "BaseBdev1", 00:24:43.417 "uuid": "43babe0c-1bf1-4b87-abe2-ae57ed4b9cbb", 00:24:43.417 "is_configured": true, 00:24:43.417 "data_offset": 0, 00:24:43.417 "data_size": 65536 00:24:43.417 }, 00:24:43.417 { 00:24:43.417 "name": null, 00:24:43.417 "uuid": "2c96e008-1748-4576-a518-4b59784a6316", 00:24:43.417 "is_configured": false, 00:24:43.417 "data_offset": 0, 00:24:43.417 "data_size": 65536 00:24:43.417 }, 00:24:43.417 { 00:24:43.417 "name": "BaseBdev3", 00:24:43.417 "uuid": "af3c06bb-8713-4220-89e9-3b324d6b5401", 00:24:43.417 "is_configured": true, 00:24:43.418 "data_offset": 0, 
00:24:43.418 "data_size": 65536 00:24:43.418 } 00:24:43.418 ] 00:24:43.418 }' 00:24:43.418 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:43.418 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.021 [2024-11-26 17:21:13.882705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:44.021 "name": "Existed_Raid", 00:24:44.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.021 "strip_size_kb": 64, 00:24:44.021 "state": "configuring", 00:24:44.021 "raid_level": "raid0", 00:24:44.021 "superblock": false, 00:24:44.021 "num_base_bdevs": 3, 00:24:44.021 "num_base_bdevs_discovered": 1, 00:24:44.021 "num_base_bdevs_operational": 3, 00:24:44.021 "base_bdevs_list": [ 00:24:44.021 { 00:24:44.021 "name": "BaseBdev1", 00:24:44.021 "uuid": "43babe0c-1bf1-4b87-abe2-ae57ed4b9cbb", 00:24:44.021 "is_configured": true, 00:24:44.021 "data_offset": 0, 00:24:44.021 "data_size": 65536 00:24:44.021 }, 00:24:44.021 { 
00:24:44.021 "name": null, 00:24:44.021 "uuid": "2c96e008-1748-4576-a518-4b59784a6316", 00:24:44.021 "is_configured": false, 00:24:44.021 "data_offset": 0, 00:24:44.021 "data_size": 65536 00:24:44.021 }, 00:24:44.021 { 00:24:44.021 "name": null, 00:24:44.021 "uuid": "af3c06bb-8713-4220-89e9-3b324d6b5401", 00:24:44.021 "is_configured": false, 00:24:44.021 "data_offset": 0, 00:24:44.021 "data_size": 65536 00:24:44.021 } 00:24:44.021 ] 00:24:44.021 }' 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:44.021 17:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.280 [2024-11-26 17:21:14.334232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:44.280 "name": "Existed_Raid", 00:24:44.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.280 "strip_size_kb": 64, 00:24:44.280 "state": "configuring", 00:24:44.280 "raid_level": "raid0", 00:24:44.280 
"superblock": false, 00:24:44.280 "num_base_bdevs": 3, 00:24:44.280 "num_base_bdevs_discovered": 2, 00:24:44.280 "num_base_bdevs_operational": 3, 00:24:44.280 "base_bdevs_list": [ 00:24:44.280 { 00:24:44.280 "name": "BaseBdev1", 00:24:44.280 "uuid": "43babe0c-1bf1-4b87-abe2-ae57ed4b9cbb", 00:24:44.280 "is_configured": true, 00:24:44.280 "data_offset": 0, 00:24:44.280 "data_size": 65536 00:24:44.280 }, 00:24:44.280 { 00:24:44.280 "name": null, 00:24:44.280 "uuid": "2c96e008-1748-4576-a518-4b59784a6316", 00:24:44.280 "is_configured": false, 00:24:44.280 "data_offset": 0, 00:24:44.280 "data_size": 65536 00:24:44.280 }, 00:24:44.280 { 00:24:44.280 "name": "BaseBdev3", 00:24:44.280 "uuid": "af3c06bb-8713-4220-89e9-3b324d6b5401", 00:24:44.280 "is_configured": true, 00:24:44.280 "data_offset": 0, 00:24:44.280 "data_size": 65536 00:24:44.280 } 00:24:44.280 ] 00:24:44.280 }' 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:44.280 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.849 [2024-11-26 17:21:14.781705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:44.849 "name": "Existed_Raid", 00:24:44.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.849 "strip_size_kb": 64, 00:24:44.849 "state": "configuring", 00:24:44.849 "raid_level": "raid0", 00:24:44.849 "superblock": false, 00:24:44.849 "num_base_bdevs": 3, 00:24:44.849 "num_base_bdevs_discovered": 1, 00:24:44.849 "num_base_bdevs_operational": 3, 00:24:44.849 "base_bdevs_list": [ 00:24:44.849 { 00:24:44.849 "name": null, 00:24:44.849 "uuid": "43babe0c-1bf1-4b87-abe2-ae57ed4b9cbb", 00:24:44.849 "is_configured": false, 00:24:44.849 "data_offset": 0, 00:24:44.849 "data_size": 65536 00:24:44.849 }, 00:24:44.849 { 00:24:44.849 "name": null, 00:24:44.849 "uuid": "2c96e008-1748-4576-a518-4b59784a6316", 00:24:44.849 "is_configured": false, 00:24:44.849 "data_offset": 0, 00:24:44.849 "data_size": 65536 00:24:44.849 }, 00:24:44.849 { 00:24:44.849 "name": "BaseBdev3", 00:24:44.849 "uuid": "af3c06bb-8713-4220-89e9-3b324d6b5401", 00:24:44.849 "is_configured": true, 00:24:44.849 "data_offset": 0, 00:24:44.849 "data_size": 65536 00:24:44.849 } 00:24:44.849 ] 00:24:44.849 }' 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:44.849 17:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.418 [2024-11-26 17:21:15.350790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.418 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:45.418 "name": "Existed_Raid", 00:24:45.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.418 "strip_size_kb": 64, 00:24:45.418 "state": "configuring", 00:24:45.418 "raid_level": "raid0", 00:24:45.418 "superblock": false, 00:24:45.418 "num_base_bdevs": 3, 00:24:45.418 "num_base_bdevs_discovered": 2, 00:24:45.418 "num_base_bdevs_operational": 3, 00:24:45.418 "base_bdevs_list": [ 00:24:45.418 { 00:24:45.418 "name": null, 00:24:45.418 "uuid": "43babe0c-1bf1-4b87-abe2-ae57ed4b9cbb", 00:24:45.418 "is_configured": false, 00:24:45.418 "data_offset": 0, 00:24:45.418 "data_size": 65536 00:24:45.418 }, 00:24:45.418 { 00:24:45.418 "name": "BaseBdev2", 00:24:45.418 "uuid": "2c96e008-1748-4576-a518-4b59784a6316", 00:24:45.418 "is_configured": true, 00:24:45.418 "data_offset": 0, 00:24:45.418 "data_size": 65536 00:24:45.418 }, 00:24:45.418 { 00:24:45.418 "name": "BaseBdev3", 00:24:45.418 "uuid": "af3c06bb-8713-4220-89e9-3b324d6b5401", 00:24:45.418 "is_configured": true, 00:24:45.419 "data_offset": 0, 00:24:45.419 "data_size": 65536 00:24:45.419 } 00:24:45.419 ] 00:24:45.419 }' 00:24:45.419 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:45.419 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.677 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.677 
17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.677 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.677 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:45.677 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.935 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:24:45.935 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:45.935 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.935 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.935 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.935 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 43babe0c-1bf1-4b87-abe2-ae57ed4b9cbb 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.936 [2024-11-26 17:21:15.896163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:45.936 [2024-11-26 17:21:15.896364] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:45.936 [2024-11-26 17:21:15.896392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:24:45.936 [2024-11-26 17:21:15.896706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:24:45.936 [2024-11-26 17:21:15.896877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:45.936 [2024-11-26 17:21:15.896888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:24:45.936 [2024-11-26 17:21:15.897152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.936 NewBaseBdev 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:24:45.936 [ 00:24:45.936 { 00:24:45.936 "name": "NewBaseBdev", 00:24:45.936 "aliases": [ 00:24:45.936 "43babe0c-1bf1-4b87-abe2-ae57ed4b9cbb" 00:24:45.936 ], 00:24:45.936 "product_name": "Malloc disk", 00:24:45.936 "block_size": 512, 00:24:45.936 "num_blocks": 65536, 00:24:45.936 "uuid": "43babe0c-1bf1-4b87-abe2-ae57ed4b9cbb", 00:24:45.936 "assigned_rate_limits": { 00:24:45.936 "rw_ios_per_sec": 0, 00:24:45.936 "rw_mbytes_per_sec": 0, 00:24:45.936 "r_mbytes_per_sec": 0, 00:24:45.936 "w_mbytes_per_sec": 0 00:24:45.936 }, 00:24:45.936 "claimed": true, 00:24:45.936 "claim_type": "exclusive_write", 00:24:45.936 "zoned": false, 00:24:45.936 "supported_io_types": { 00:24:45.936 "read": true, 00:24:45.936 "write": true, 00:24:45.936 "unmap": true, 00:24:45.936 "flush": true, 00:24:45.936 "reset": true, 00:24:45.936 "nvme_admin": false, 00:24:45.936 "nvme_io": false, 00:24:45.936 "nvme_io_md": false, 00:24:45.936 "write_zeroes": true, 00:24:45.936 "zcopy": true, 00:24:45.936 "get_zone_info": false, 00:24:45.936 "zone_management": false, 00:24:45.936 "zone_append": false, 00:24:45.936 "compare": false, 00:24:45.936 "compare_and_write": false, 00:24:45.936 "abort": true, 00:24:45.936 "seek_hole": false, 00:24:45.936 "seek_data": false, 00:24:45.936 "copy": true, 00:24:45.936 "nvme_iov_md": false 00:24:45.936 }, 00:24:45.936 "memory_domains": [ 00:24:45.936 { 00:24:45.936 "dma_device_id": "system", 00:24:45.936 "dma_device_type": 1 00:24:45.936 }, 00:24:45.936 { 00:24:45.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.936 "dma_device_type": 2 00:24:45.936 } 00:24:45.936 ], 00:24:45.936 "driver_specific": {} 00:24:45.936 } 00:24:45.936 ] 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:45.936 "name": "Existed_Raid", 00:24:45.936 "uuid": "c565798a-fc01-4746-947e-8563fba9cfa1", 00:24:45.936 "strip_size_kb": 64, 00:24:45.936 "state": "online", 00:24:45.936 "raid_level": "raid0", 00:24:45.936 "superblock": false, 00:24:45.936 "num_base_bdevs": 3, 00:24:45.936 
"num_base_bdevs_discovered": 3, 00:24:45.936 "num_base_bdevs_operational": 3, 00:24:45.936 "base_bdevs_list": [ 00:24:45.936 { 00:24:45.936 "name": "NewBaseBdev", 00:24:45.936 "uuid": "43babe0c-1bf1-4b87-abe2-ae57ed4b9cbb", 00:24:45.936 "is_configured": true, 00:24:45.936 "data_offset": 0, 00:24:45.936 "data_size": 65536 00:24:45.936 }, 00:24:45.936 { 00:24:45.936 "name": "BaseBdev2", 00:24:45.936 "uuid": "2c96e008-1748-4576-a518-4b59784a6316", 00:24:45.936 "is_configured": true, 00:24:45.936 "data_offset": 0, 00:24:45.936 "data_size": 65536 00:24:45.936 }, 00:24:45.936 { 00:24:45.936 "name": "BaseBdev3", 00:24:45.936 "uuid": "af3c06bb-8713-4220-89e9-3b324d6b5401", 00:24:45.936 "is_configured": true, 00:24:45.936 "data_offset": 0, 00:24:45.936 "data_size": 65536 00:24:45.936 } 00:24:45.936 ] 00:24:45.936 }' 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:45.936 17:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.505 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:24:46.505 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:46.505 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:46.505 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:46.505 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:46.505 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.506 [2024-11-26 17:21:16.375885] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:46.506 "name": "Existed_Raid", 00:24:46.506 "aliases": [ 00:24:46.506 "c565798a-fc01-4746-947e-8563fba9cfa1" 00:24:46.506 ], 00:24:46.506 "product_name": "Raid Volume", 00:24:46.506 "block_size": 512, 00:24:46.506 "num_blocks": 196608, 00:24:46.506 "uuid": "c565798a-fc01-4746-947e-8563fba9cfa1", 00:24:46.506 "assigned_rate_limits": { 00:24:46.506 "rw_ios_per_sec": 0, 00:24:46.506 "rw_mbytes_per_sec": 0, 00:24:46.506 "r_mbytes_per_sec": 0, 00:24:46.506 "w_mbytes_per_sec": 0 00:24:46.506 }, 00:24:46.506 "claimed": false, 00:24:46.506 "zoned": false, 00:24:46.506 "supported_io_types": { 00:24:46.506 "read": true, 00:24:46.506 "write": true, 00:24:46.506 "unmap": true, 00:24:46.506 "flush": true, 00:24:46.506 "reset": true, 00:24:46.506 "nvme_admin": false, 00:24:46.506 "nvme_io": false, 00:24:46.506 "nvme_io_md": false, 00:24:46.506 "write_zeroes": true, 00:24:46.506 "zcopy": false, 00:24:46.506 "get_zone_info": false, 00:24:46.506 "zone_management": false, 00:24:46.506 "zone_append": false, 00:24:46.506 "compare": false, 00:24:46.506 "compare_and_write": false, 00:24:46.506 "abort": false, 00:24:46.506 "seek_hole": false, 00:24:46.506 "seek_data": false, 00:24:46.506 "copy": false, 00:24:46.506 "nvme_iov_md": false 00:24:46.506 }, 00:24:46.506 "memory_domains": [ 00:24:46.506 { 00:24:46.506 "dma_device_id": "system", 00:24:46.506 "dma_device_type": 1 00:24:46.506 }, 00:24:46.506 { 00:24:46.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.506 "dma_device_type": 2 00:24:46.506 }, 
00:24:46.506 { 00:24:46.506 "dma_device_id": "system", 00:24:46.506 "dma_device_type": 1 00:24:46.506 }, 00:24:46.506 { 00:24:46.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.506 "dma_device_type": 2 00:24:46.506 }, 00:24:46.506 { 00:24:46.506 "dma_device_id": "system", 00:24:46.506 "dma_device_type": 1 00:24:46.506 }, 00:24:46.506 { 00:24:46.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.506 "dma_device_type": 2 00:24:46.506 } 00:24:46.506 ], 00:24:46.506 "driver_specific": { 00:24:46.506 "raid": { 00:24:46.506 "uuid": "c565798a-fc01-4746-947e-8563fba9cfa1", 00:24:46.506 "strip_size_kb": 64, 00:24:46.506 "state": "online", 00:24:46.506 "raid_level": "raid0", 00:24:46.506 "superblock": false, 00:24:46.506 "num_base_bdevs": 3, 00:24:46.506 "num_base_bdevs_discovered": 3, 00:24:46.506 "num_base_bdevs_operational": 3, 00:24:46.506 "base_bdevs_list": [ 00:24:46.506 { 00:24:46.506 "name": "NewBaseBdev", 00:24:46.506 "uuid": "43babe0c-1bf1-4b87-abe2-ae57ed4b9cbb", 00:24:46.506 "is_configured": true, 00:24:46.506 "data_offset": 0, 00:24:46.506 "data_size": 65536 00:24:46.506 }, 00:24:46.506 { 00:24:46.506 "name": "BaseBdev2", 00:24:46.506 "uuid": "2c96e008-1748-4576-a518-4b59784a6316", 00:24:46.506 "is_configured": true, 00:24:46.506 "data_offset": 0, 00:24:46.506 "data_size": 65536 00:24:46.506 }, 00:24:46.506 { 00:24:46.506 "name": "BaseBdev3", 00:24:46.506 "uuid": "af3c06bb-8713-4220-89e9-3b324d6b5401", 00:24:46.506 "is_configured": true, 00:24:46.506 "data_offset": 0, 00:24:46.506 "data_size": 65536 00:24:46.506 } 00:24:46.506 ] 00:24:46.506 } 00:24:46.506 } 00:24:46.506 }' 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:24:46.506 BaseBdev2 00:24:46.506 BaseBdev3' 00:24:46.506 17:21:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.506 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.767 [2024-11-26 17:21:16.651171] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:46.767 [2024-11-26 17:21:16.651212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:46.767 [2024-11-26 17:21:16.651310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:46.767 [2024-11-26 17:21:16.651368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:46.767 [2024-11-26 17:21:16.651384] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63920 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63920 ']' 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63920 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63920 00:24:46.767 killing process with pid 63920 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63920' 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63920 00:24:46.767 [2024-11-26 17:21:16.702638] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:46.767 17:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63920 00:24:47.026 [2024-11-26 17:21:17.016956] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:24:48.404 00:24:48.404 real 0m10.546s 00:24:48.404 user 0m16.566s 00:24:48.404 sys 0m2.170s 00:24:48.404 ************************************ 00:24:48.404 END TEST 
raid_state_function_test 00:24:48.404 ************************************ 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:48.404 17:21:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:24:48.404 17:21:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:48.404 17:21:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:48.404 17:21:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:48.404 ************************************ 00:24:48.404 START TEST raid_state_function_test_sb 00:24:48.404 ************************************ 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:48.404 17:21:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=64541 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64541' 00:24:48.404 Process raid pid: 64541 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64541 00:24:48.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64541 ']' 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.404 17:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.404 [2024-11-26 17:21:18.394653] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:24:48.404 [2024-11-26 17:21:18.394936] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.663 [2024-11-26 17:21:18.581992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.663 [2024-11-26 17:21:18.742704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.923 [2024-11-26 17:21:18.981074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:48.923 [2024-11-26 17:21:18.981402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.182 [2024-11-26 17:21:19.245954] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:49.182 [2024-11-26 17:21:19.246050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:49.182 [2024-11-26 17:21:19.246065] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:49.182 [2024-11-26 17:21:19.246081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:49.182 [2024-11-26 17:21:19.246091] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:24:49.182 [2024-11-26 17:21:19.246106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.182 17:21:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.440 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:49.440 "name": "Existed_Raid", 00:24:49.440 "uuid": "af9de11a-0a20-4663-842a-b989ea4e6a17", 00:24:49.440 "strip_size_kb": 64, 00:24:49.440 "state": "configuring", 00:24:49.440 "raid_level": "raid0", 00:24:49.440 "superblock": true, 00:24:49.440 "num_base_bdevs": 3, 00:24:49.440 "num_base_bdevs_discovered": 0, 00:24:49.440 "num_base_bdevs_operational": 3, 00:24:49.440 "base_bdevs_list": [ 00:24:49.440 { 00:24:49.440 "name": "BaseBdev1", 00:24:49.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.440 "is_configured": false, 00:24:49.440 "data_offset": 0, 00:24:49.440 "data_size": 0 00:24:49.440 }, 00:24:49.440 { 00:24:49.440 "name": "BaseBdev2", 00:24:49.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.440 "is_configured": false, 00:24:49.440 "data_offset": 0, 00:24:49.440 "data_size": 0 00:24:49.440 }, 00:24:49.440 { 00:24:49.440 "name": "BaseBdev3", 00:24:49.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.440 "is_configured": false, 00:24:49.440 "data_offset": 0, 00:24:49.440 "data_size": 0 00:24:49.440 } 00:24:49.440 ] 00:24:49.440 }' 00:24:49.440 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:49.440 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.699 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:49.699 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.699 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.699 [2024-11-26 17:21:19.661797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:49.699 [2024-11-26 17:21:19.661875] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:49.699 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.699 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:24:49.699 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.699 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.699 [2024-11-26 17:21:19.673802] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:49.699 [2024-11-26 17:21:19.673896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:49.699 [2024-11-26 17:21:19.673909] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:49.699 [2024-11-26 17:21:19.673926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:49.699 [2024-11-26 17:21:19.673935] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:49.699 [2024-11-26 17:21:19.673950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:49.699 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.699 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:49.699 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.699 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.699 [2024-11-26 17:21:19.730573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:49.699 BaseBdev1 
00:24:49.699 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.699 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:49.699 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.700 [ 00:24:49.700 { 00:24:49.700 "name": "BaseBdev1", 00:24:49.700 "aliases": [ 00:24:49.700 "e683d027-ba43-45cb-b96a-0f403b4bbfb4" 00:24:49.700 ], 00:24:49.700 "product_name": "Malloc disk", 00:24:49.700 "block_size": 512, 00:24:49.700 "num_blocks": 65536, 00:24:49.700 "uuid": "e683d027-ba43-45cb-b96a-0f403b4bbfb4", 00:24:49.700 "assigned_rate_limits": { 00:24:49.700 
"rw_ios_per_sec": 0, 00:24:49.700 "rw_mbytes_per_sec": 0, 00:24:49.700 "r_mbytes_per_sec": 0, 00:24:49.700 "w_mbytes_per_sec": 0 00:24:49.700 }, 00:24:49.700 "claimed": true, 00:24:49.700 "claim_type": "exclusive_write", 00:24:49.700 "zoned": false, 00:24:49.700 "supported_io_types": { 00:24:49.700 "read": true, 00:24:49.700 "write": true, 00:24:49.700 "unmap": true, 00:24:49.700 "flush": true, 00:24:49.700 "reset": true, 00:24:49.700 "nvme_admin": false, 00:24:49.700 "nvme_io": false, 00:24:49.700 "nvme_io_md": false, 00:24:49.700 "write_zeroes": true, 00:24:49.700 "zcopy": true, 00:24:49.700 "get_zone_info": false, 00:24:49.700 "zone_management": false, 00:24:49.700 "zone_append": false, 00:24:49.700 "compare": false, 00:24:49.700 "compare_and_write": false, 00:24:49.700 "abort": true, 00:24:49.700 "seek_hole": false, 00:24:49.700 "seek_data": false, 00:24:49.700 "copy": true, 00:24:49.700 "nvme_iov_md": false 00:24:49.700 }, 00:24:49.700 "memory_domains": [ 00:24:49.700 { 00:24:49.700 "dma_device_id": "system", 00:24:49.700 "dma_device_type": 1 00:24:49.700 }, 00:24:49.700 { 00:24:49.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.700 "dma_device_type": 2 00:24:49.700 } 00:24:49.700 ], 00:24:49.700 "driver_specific": {} 00:24:49.700 } 00:24:49.700 ] 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.700 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.958 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:49.958 "name": "Existed_Raid", 00:24:49.958 "uuid": "b01afd94-4640-4530-b2b5-0c2c280709bb", 00:24:49.958 "strip_size_kb": 64, 00:24:49.958 "state": "configuring", 00:24:49.958 "raid_level": "raid0", 00:24:49.958 "superblock": true, 00:24:49.958 "num_base_bdevs": 3, 00:24:49.958 "num_base_bdevs_discovered": 1, 00:24:49.958 "num_base_bdevs_operational": 3, 00:24:49.958 "base_bdevs_list": [ 00:24:49.958 { 00:24:49.958 "name": "BaseBdev1", 00:24:49.958 "uuid": "e683d027-ba43-45cb-b96a-0f403b4bbfb4", 00:24:49.958 "is_configured": true, 00:24:49.958 "data_offset": 2048, 00:24:49.959 "data_size": 63488 
00:24:49.959 }, 00:24:49.959 { 00:24:49.959 "name": "BaseBdev2", 00:24:49.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.959 "is_configured": false, 00:24:49.959 "data_offset": 0, 00:24:49.959 "data_size": 0 00:24:49.959 }, 00:24:49.959 { 00:24:49.959 "name": "BaseBdev3", 00:24:49.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.959 "is_configured": false, 00:24:49.959 "data_offset": 0, 00:24:49.959 "data_size": 0 00:24:49.959 } 00:24:49.959 ] 00:24:49.959 }' 00:24:49.959 17:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:49.959 17:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.218 [2024-11-26 17:21:20.197989] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:50.218 [2024-11-26 17:21:20.198313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.218 [2024-11-26 17:21:20.210063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:50.218 [2024-11-26 
17:21:20.212467] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:50.218 [2024-11-26 17:21:20.212546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:50.218 [2024-11-26 17:21:20.212561] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:50.218 [2024-11-26 17:21:20.212577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.218 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:50.218 "name": "Existed_Raid", 00:24:50.218 "uuid": "8af6a5d3-dc15-4513-b7e0-e23279093dd5", 00:24:50.218 "strip_size_kb": 64, 00:24:50.218 "state": "configuring", 00:24:50.218 "raid_level": "raid0", 00:24:50.218 "superblock": true, 00:24:50.218 "num_base_bdevs": 3, 00:24:50.218 "num_base_bdevs_discovered": 1, 00:24:50.218 "num_base_bdevs_operational": 3, 00:24:50.218 "base_bdevs_list": [ 00:24:50.218 { 00:24:50.218 "name": "BaseBdev1", 00:24:50.218 "uuid": "e683d027-ba43-45cb-b96a-0f403b4bbfb4", 00:24:50.218 "is_configured": true, 00:24:50.218 "data_offset": 2048, 00:24:50.218 "data_size": 63488 00:24:50.218 }, 00:24:50.218 { 00:24:50.218 "name": "BaseBdev2", 00:24:50.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.218 "is_configured": false, 00:24:50.218 "data_offset": 0, 00:24:50.218 "data_size": 0 00:24:50.218 }, 00:24:50.218 { 00:24:50.219 "name": "BaseBdev3", 00:24:50.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.219 "is_configured": false, 00:24:50.219 "data_offset": 0, 00:24:50.219 "data_size": 0 00:24:50.219 } 00:24:50.219 ] 00:24:50.219 }' 00:24:50.219 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:50.219 17:21:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.787 [2024-11-26 17:21:20.663160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:50.787 BaseBdev2 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.787 [ 00:24:50.787 { 00:24:50.787 "name": "BaseBdev2", 00:24:50.787 "aliases": [ 00:24:50.787 "5da41f50-966c-4dce-b647-56f4974e6bf0" 00:24:50.787 ], 00:24:50.787 "product_name": "Malloc disk", 00:24:50.787 "block_size": 512, 00:24:50.787 "num_blocks": 65536, 00:24:50.787 "uuid": "5da41f50-966c-4dce-b647-56f4974e6bf0", 00:24:50.787 "assigned_rate_limits": { 00:24:50.787 "rw_ios_per_sec": 0, 00:24:50.787 "rw_mbytes_per_sec": 0, 00:24:50.787 "r_mbytes_per_sec": 0, 00:24:50.787 "w_mbytes_per_sec": 0 00:24:50.787 }, 00:24:50.787 "claimed": true, 00:24:50.787 "claim_type": "exclusive_write", 00:24:50.787 "zoned": false, 00:24:50.787 "supported_io_types": { 00:24:50.787 "read": true, 00:24:50.787 "write": true, 00:24:50.787 "unmap": true, 00:24:50.787 "flush": true, 00:24:50.787 "reset": true, 00:24:50.787 "nvme_admin": false, 00:24:50.787 "nvme_io": false, 00:24:50.787 "nvme_io_md": false, 00:24:50.787 "write_zeroes": true, 00:24:50.787 "zcopy": true, 00:24:50.787 "get_zone_info": false, 00:24:50.787 "zone_management": false, 00:24:50.787 "zone_append": false, 00:24:50.787 "compare": false, 00:24:50.787 "compare_and_write": false, 00:24:50.787 "abort": true, 00:24:50.787 "seek_hole": false, 00:24:50.787 "seek_data": false, 00:24:50.787 "copy": true, 00:24:50.787 "nvme_iov_md": false 00:24:50.787 }, 00:24:50.787 "memory_domains": [ 00:24:50.787 { 00:24:50.787 "dma_device_id": "system", 00:24:50.787 "dma_device_type": 1 00:24:50.787 }, 00:24:50.787 { 00:24:50.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:50.787 "dma_device_type": 2 00:24:50.787 } 00:24:50.787 ], 00:24:50.787 "driver_specific": {} 00:24:50.787 } 00:24:50.787 ] 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:50.787 "name": "Existed_Raid", 00:24:50.787 "uuid": "8af6a5d3-dc15-4513-b7e0-e23279093dd5", 00:24:50.787 "strip_size_kb": 64, 00:24:50.787 "state": "configuring", 00:24:50.787 "raid_level": "raid0", 00:24:50.787 "superblock": true, 00:24:50.787 "num_base_bdevs": 3, 00:24:50.787 "num_base_bdevs_discovered": 2, 00:24:50.787 "num_base_bdevs_operational": 3, 00:24:50.787 "base_bdevs_list": [ 00:24:50.787 { 00:24:50.787 "name": "BaseBdev1", 00:24:50.787 "uuid": "e683d027-ba43-45cb-b96a-0f403b4bbfb4", 00:24:50.787 "is_configured": true, 00:24:50.787 "data_offset": 2048, 00:24:50.787 "data_size": 63488 00:24:50.787 }, 00:24:50.787 { 00:24:50.787 "name": "BaseBdev2", 00:24:50.787 "uuid": "5da41f50-966c-4dce-b647-56f4974e6bf0", 00:24:50.787 "is_configured": true, 00:24:50.787 "data_offset": 2048, 00:24:50.787 "data_size": 63488 00:24:50.787 }, 00:24:50.787 { 00:24:50.787 "name": "BaseBdev3", 00:24:50.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.787 "is_configured": false, 00:24:50.787 "data_offset": 0, 00:24:50.787 "data_size": 0 00:24:50.787 } 00:24:50.787 ] 00:24:50.787 }' 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:50.787 17:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:51.046 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:24:51.046 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.046 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:51.305 [2024-11-26 17:21:21.177817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:51.305 [2024-11-26 17:21:21.178374] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:51.305 [2024-11-26 17:21:21.178410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:51.305 [2024-11-26 17:21:21.178763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:51.305 BaseBdev3 00:24:51.305 [2024-11-26 17:21:21.178941] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:51.305 [2024-11-26 17:21:21.178954] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:51.305 [2024-11-26 17:21:21.179109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:51.305 [ 00:24:51.305 { 00:24:51.305 "name": "BaseBdev3", 00:24:51.305 "aliases": [ 00:24:51.305 "188c02b8-3e86-4fda-bd31-eb557fa8d30e" 00:24:51.305 ], 00:24:51.305 "product_name": "Malloc disk", 00:24:51.305 "block_size": 512, 00:24:51.305 "num_blocks": 65536, 00:24:51.305 "uuid": "188c02b8-3e86-4fda-bd31-eb557fa8d30e", 00:24:51.305 "assigned_rate_limits": { 00:24:51.305 "rw_ios_per_sec": 0, 00:24:51.305 "rw_mbytes_per_sec": 0, 00:24:51.305 "r_mbytes_per_sec": 0, 00:24:51.305 "w_mbytes_per_sec": 0 00:24:51.305 }, 00:24:51.305 "claimed": true, 00:24:51.305 "claim_type": "exclusive_write", 00:24:51.305 "zoned": false, 00:24:51.305 "supported_io_types": { 00:24:51.305 "read": true, 00:24:51.305 "write": true, 00:24:51.305 "unmap": true, 00:24:51.305 "flush": true, 00:24:51.305 "reset": true, 00:24:51.305 "nvme_admin": false, 00:24:51.305 "nvme_io": false, 00:24:51.305 "nvme_io_md": false, 00:24:51.305 "write_zeroes": true, 00:24:51.305 "zcopy": true, 00:24:51.305 "get_zone_info": false, 00:24:51.305 "zone_management": false, 00:24:51.305 "zone_append": false, 00:24:51.305 "compare": false, 00:24:51.305 "compare_and_write": false, 00:24:51.305 "abort": true, 00:24:51.305 "seek_hole": false, 00:24:51.305 "seek_data": false, 00:24:51.305 "copy": true, 00:24:51.305 "nvme_iov_md": false 00:24:51.305 }, 00:24:51.305 "memory_domains": [ 00:24:51.305 { 00:24:51.305 "dma_device_id": "system", 00:24:51.305 "dma_device_type": 1 00:24:51.305 }, 00:24:51.305 { 00:24:51.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:51.305 "dma_device_type": 2 00:24:51.305 } 00:24:51.305 ], 00:24:51.305 "driver_specific": 
{} 00:24:51.305 } 00:24:51.305 ] 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.305 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:51.305 "name": "Existed_Raid", 00:24:51.305 "uuid": "8af6a5d3-dc15-4513-b7e0-e23279093dd5", 00:24:51.305 "strip_size_kb": 64, 00:24:51.305 "state": "online", 00:24:51.305 "raid_level": "raid0", 00:24:51.305 "superblock": true, 00:24:51.305 "num_base_bdevs": 3, 00:24:51.305 "num_base_bdevs_discovered": 3, 00:24:51.305 "num_base_bdevs_operational": 3, 00:24:51.305 "base_bdevs_list": [ 00:24:51.306 { 00:24:51.306 "name": "BaseBdev1", 00:24:51.306 "uuid": "e683d027-ba43-45cb-b96a-0f403b4bbfb4", 00:24:51.306 "is_configured": true, 00:24:51.306 "data_offset": 2048, 00:24:51.306 "data_size": 63488 00:24:51.306 }, 00:24:51.306 { 00:24:51.306 "name": "BaseBdev2", 00:24:51.306 "uuid": "5da41f50-966c-4dce-b647-56f4974e6bf0", 00:24:51.306 "is_configured": true, 00:24:51.306 "data_offset": 2048, 00:24:51.306 "data_size": 63488 00:24:51.306 }, 00:24:51.306 { 00:24:51.306 "name": "BaseBdev3", 00:24:51.306 "uuid": "188c02b8-3e86-4fda-bd31-eb557fa8d30e", 00:24:51.306 "is_configured": true, 00:24:51.306 "data_offset": 2048, 00:24:51.306 "data_size": 63488 00:24:51.306 } 00:24:51.306 ] 00:24:51.306 }' 00:24:51.306 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:51.306 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:51.565 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:51.565 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:51.565 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:24:51.565 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:51.565 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:24:51.565 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:51.565 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:51.565 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:51.565 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.565 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:51.565 [2024-11-26 17:21:21.666056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:51.823 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.823 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:51.823 "name": "Existed_Raid", 00:24:51.823 "aliases": [ 00:24:51.823 "8af6a5d3-dc15-4513-b7e0-e23279093dd5" 00:24:51.823 ], 00:24:51.823 "product_name": "Raid Volume", 00:24:51.823 "block_size": 512, 00:24:51.823 "num_blocks": 190464, 00:24:51.823 "uuid": "8af6a5d3-dc15-4513-b7e0-e23279093dd5", 00:24:51.823 "assigned_rate_limits": { 00:24:51.823 "rw_ios_per_sec": 0, 00:24:51.823 "rw_mbytes_per_sec": 0, 00:24:51.823 "r_mbytes_per_sec": 0, 00:24:51.823 "w_mbytes_per_sec": 0 00:24:51.823 }, 00:24:51.823 "claimed": false, 00:24:51.823 "zoned": false, 00:24:51.823 "supported_io_types": { 00:24:51.823 "read": true, 00:24:51.823 "write": true, 00:24:51.824 "unmap": true, 00:24:51.824 "flush": true, 00:24:51.824 "reset": true, 00:24:51.824 "nvme_admin": false, 00:24:51.824 "nvme_io": false, 00:24:51.824 "nvme_io_md": false, 00:24:51.824 
"write_zeroes": true, 00:24:51.824 "zcopy": false, 00:24:51.824 "get_zone_info": false, 00:24:51.824 "zone_management": false, 00:24:51.824 "zone_append": false, 00:24:51.824 "compare": false, 00:24:51.824 "compare_and_write": false, 00:24:51.824 "abort": false, 00:24:51.824 "seek_hole": false, 00:24:51.824 "seek_data": false, 00:24:51.824 "copy": false, 00:24:51.824 "nvme_iov_md": false 00:24:51.824 }, 00:24:51.824 "memory_domains": [ 00:24:51.824 { 00:24:51.824 "dma_device_id": "system", 00:24:51.824 "dma_device_type": 1 00:24:51.824 }, 00:24:51.824 { 00:24:51.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:51.824 "dma_device_type": 2 00:24:51.824 }, 00:24:51.824 { 00:24:51.824 "dma_device_id": "system", 00:24:51.824 "dma_device_type": 1 00:24:51.824 }, 00:24:51.824 { 00:24:51.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:51.824 "dma_device_type": 2 00:24:51.824 }, 00:24:51.824 { 00:24:51.824 "dma_device_id": "system", 00:24:51.824 "dma_device_type": 1 00:24:51.824 }, 00:24:51.824 { 00:24:51.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:51.824 "dma_device_type": 2 00:24:51.824 } 00:24:51.824 ], 00:24:51.824 "driver_specific": { 00:24:51.824 "raid": { 00:24:51.824 "uuid": "8af6a5d3-dc15-4513-b7e0-e23279093dd5", 00:24:51.824 "strip_size_kb": 64, 00:24:51.824 "state": "online", 00:24:51.824 "raid_level": "raid0", 00:24:51.824 "superblock": true, 00:24:51.824 "num_base_bdevs": 3, 00:24:51.824 "num_base_bdevs_discovered": 3, 00:24:51.824 "num_base_bdevs_operational": 3, 00:24:51.824 "base_bdevs_list": [ 00:24:51.824 { 00:24:51.824 "name": "BaseBdev1", 00:24:51.824 "uuid": "e683d027-ba43-45cb-b96a-0f403b4bbfb4", 00:24:51.824 "is_configured": true, 00:24:51.824 "data_offset": 2048, 00:24:51.824 "data_size": 63488 00:24:51.824 }, 00:24:51.824 { 00:24:51.824 "name": "BaseBdev2", 00:24:51.824 "uuid": "5da41f50-966c-4dce-b647-56f4974e6bf0", 00:24:51.824 "is_configured": true, 00:24:51.824 "data_offset": 2048, 00:24:51.824 "data_size": 63488 00:24:51.824 }, 
00:24:51.824 { 00:24:51.824 "name": "BaseBdev3", 00:24:51.824 "uuid": "188c02b8-3e86-4fda-bd31-eb557fa8d30e", 00:24:51.824 "is_configured": true, 00:24:51.824 "data_offset": 2048, 00:24:51.824 "data_size": 63488 00:24:51.824 } 00:24:51.824 ] 00:24:51.824 } 00:24:51.824 } 00:24:51.824 }' 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:51.824 BaseBdev2 00:24:51.824 BaseBdev3' 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:51.824 
17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:51.824 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.082 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:52.083 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:52.083 17:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:52.083 17:21:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.083 17:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.083 [2024-11-26 17:21:21.945721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:52.083 [2024-11-26 17:21:21.945800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:52.083 [2024-11-26 17:21:21.945872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:52.083 "name": "Existed_Raid", 00:24:52.083 "uuid": "8af6a5d3-dc15-4513-b7e0-e23279093dd5", 00:24:52.083 "strip_size_kb": 64, 00:24:52.083 "state": "offline", 00:24:52.083 "raid_level": "raid0", 00:24:52.083 "superblock": true, 00:24:52.083 "num_base_bdevs": 3, 00:24:52.083 "num_base_bdevs_discovered": 2, 00:24:52.083 "num_base_bdevs_operational": 2, 00:24:52.083 "base_bdevs_list": [ 00:24:52.083 { 00:24:52.083 "name": null, 00:24:52.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.083 "is_configured": false, 00:24:52.083 "data_offset": 0, 00:24:52.083 "data_size": 63488 00:24:52.083 }, 00:24:52.083 { 00:24:52.083 "name": "BaseBdev2", 00:24:52.083 "uuid": "5da41f50-966c-4dce-b647-56f4974e6bf0", 00:24:52.083 "is_configured": true, 00:24:52.083 "data_offset": 2048, 00:24:52.083 "data_size": 63488 00:24:52.083 }, 00:24:52.083 { 00:24:52.083 "name": "BaseBdev3", 00:24:52.083 "uuid": "188c02b8-3e86-4fda-bd31-eb557fa8d30e", 
00:24:52.083 "is_configured": true, 00:24:52.083 "data_offset": 2048, 00:24:52.083 "data_size": 63488 00:24:52.083 } 00:24:52.083 ] 00:24:52.083 }' 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:52.083 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.342 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:52.342 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.601 [2024-11-26 17:21:22.503796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.601 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.601 [2024-11-26 17:21:22.655819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:52.601 [2024-11-26 17:21:22.656138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.860 BaseBdev2 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.860 [ 00:24:52.860 { 00:24:52.860 "name": "BaseBdev2", 00:24:52.860 "aliases": [ 00:24:52.860 "274da511-d0c7-488b-9e01-b2c9b5eaf8ad" 00:24:52.860 ], 00:24:52.860 "product_name": "Malloc disk", 00:24:52.860 "block_size": 512, 00:24:52.860 "num_blocks": 65536, 00:24:52.860 "uuid": "274da511-d0c7-488b-9e01-b2c9b5eaf8ad", 00:24:52.860 "assigned_rate_limits": { 00:24:52.860 "rw_ios_per_sec": 0, 00:24:52.860 "rw_mbytes_per_sec": 0, 00:24:52.860 "r_mbytes_per_sec": 0, 00:24:52.860 "w_mbytes_per_sec": 0 00:24:52.860 }, 00:24:52.860 "claimed": false, 00:24:52.860 "zoned": false, 00:24:52.860 "supported_io_types": { 00:24:52.860 "read": true, 00:24:52.860 "write": true, 00:24:52.860 "unmap": true, 00:24:52.860 "flush": true, 00:24:52.860 "reset": true, 00:24:52.860 "nvme_admin": false, 00:24:52.860 "nvme_io": false, 00:24:52.860 "nvme_io_md": false, 00:24:52.860 "write_zeroes": true, 00:24:52.860 "zcopy": true, 00:24:52.860 "get_zone_info": false, 00:24:52.860 "zone_management": false, 00:24:52.860 
"zone_append": false, 00:24:52.860 "compare": false, 00:24:52.860 "compare_and_write": false, 00:24:52.860 "abort": true, 00:24:52.860 "seek_hole": false, 00:24:52.860 "seek_data": false, 00:24:52.860 "copy": true, 00:24:52.860 "nvme_iov_md": false 00:24:52.860 }, 00:24:52.860 "memory_domains": [ 00:24:52.860 { 00:24:52.860 "dma_device_id": "system", 00:24:52.860 "dma_device_type": 1 00:24:52.860 }, 00:24:52.860 { 00:24:52.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.860 "dma_device_type": 2 00:24:52.860 } 00:24:52.860 ], 00:24:52.860 "driver_specific": {} 00:24:52.860 } 00:24:52.860 ] 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.860 BaseBdev3 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:52.860 
17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.860 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.120 [ 00:24:53.120 { 00:24:53.120 "name": "BaseBdev3", 00:24:53.120 "aliases": [ 00:24:53.120 "6a6801b4-5481-469a-b930-3f60cae3f729" 00:24:53.120 ], 00:24:53.120 "product_name": "Malloc disk", 00:24:53.120 "block_size": 512, 00:24:53.120 "num_blocks": 65536, 00:24:53.120 "uuid": "6a6801b4-5481-469a-b930-3f60cae3f729", 00:24:53.120 "assigned_rate_limits": { 00:24:53.120 "rw_ios_per_sec": 0, 00:24:53.120 "rw_mbytes_per_sec": 0, 00:24:53.120 "r_mbytes_per_sec": 0, 00:24:53.120 "w_mbytes_per_sec": 0 00:24:53.120 }, 00:24:53.120 "claimed": false, 00:24:53.120 "zoned": false, 00:24:53.120 "supported_io_types": { 00:24:53.120 "read": true, 00:24:53.120 "write": true, 00:24:53.120 "unmap": true, 00:24:53.120 "flush": true, 00:24:53.120 "reset": true, 00:24:53.120 "nvme_admin": false, 00:24:53.120 "nvme_io": false, 00:24:53.120 "nvme_io_md": false, 00:24:53.120 "write_zeroes": true, 00:24:53.120 "zcopy": true, 00:24:53.120 "get_zone_info": false, 
00:24:53.120 "zone_management": false, 00:24:53.120 "zone_append": false, 00:24:53.120 "compare": false, 00:24:53.120 "compare_and_write": false, 00:24:53.120 "abort": true, 00:24:53.120 "seek_hole": false, 00:24:53.120 "seek_data": false, 00:24:53.120 "copy": true, 00:24:53.120 "nvme_iov_md": false 00:24:53.120 }, 00:24:53.120 "memory_domains": [ 00:24:53.120 { 00:24:53.120 "dma_device_id": "system", 00:24:53.120 "dma_device_type": 1 00:24:53.120 }, 00:24:53.120 { 00:24:53.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:53.120 "dma_device_type": 2 00:24:53.120 } 00:24:53.120 ], 00:24:53.120 "driver_specific": {} 00:24:53.120 } 00:24:53.120 ] 00:24:53.120 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.120 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:53.120 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:53.120 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:53.120 17:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:24:53.120 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.120 17:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.120 [2024-11-26 17:21:23.004901] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:53.120 [2024-11-26 17:21:23.005194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:53.120 [2024-11-26 17:21:23.005363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:53.120 [2024-11-26 17:21:23.007790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:24:53.120 "name": "Existed_Raid", 00:24:53.120 "uuid": "629ddded-f1e6-43cf-9709-e996c86652d2", 00:24:53.120 "strip_size_kb": 64, 00:24:53.120 "state": "configuring", 00:24:53.120 "raid_level": "raid0", 00:24:53.120 "superblock": true, 00:24:53.120 "num_base_bdevs": 3, 00:24:53.120 "num_base_bdevs_discovered": 2, 00:24:53.120 "num_base_bdevs_operational": 3, 00:24:53.120 "base_bdevs_list": [ 00:24:53.120 { 00:24:53.120 "name": "BaseBdev1", 00:24:53.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.120 "is_configured": false, 00:24:53.120 "data_offset": 0, 00:24:53.120 "data_size": 0 00:24:53.120 }, 00:24:53.120 { 00:24:53.120 "name": "BaseBdev2", 00:24:53.120 "uuid": "274da511-d0c7-488b-9e01-b2c9b5eaf8ad", 00:24:53.120 "is_configured": true, 00:24:53.120 "data_offset": 2048, 00:24:53.120 "data_size": 63488 00:24:53.120 }, 00:24:53.120 { 00:24:53.120 "name": "BaseBdev3", 00:24:53.120 "uuid": "6a6801b4-5481-469a-b930-3f60cae3f729", 00:24:53.120 "is_configured": true, 00:24:53.120 "data_offset": 2048, 00:24:53.120 "data_size": 63488 00:24:53.120 } 00:24:53.120 ] 00:24:53.120 }' 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:53.120 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.379 [2024-11-26 17:21:23.424763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:53.379 "name": "Existed_Raid", 00:24:53.379 "uuid": "629ddded-f1e6-43cf-9709-e996c86652d2", 00:24:53.379 "strip_size_kb": 64, 00:24:53.379 "state": "configuring", 00:24:53.379 "raid_level": "raid0", 
00:24:53.379 "superblock": true, 00:24:53.379 "num_base_bdevs": 3, 00:24:53.379 "num_base_bdevs_discovered": 1, 00:24:53.379 "num_base_bdevs_operational": 3, 00:24:53.379 "base_bdevs_list": [ 00:24:53.379 { 00:24:53.379 "name": "BaseBdev1", 00:24:53.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.379 "is_configured": false, 00:24:53.379 "data_offset": 0, 00:24:53.379 "data_size": 0 00:24:53.379 }, 00:24:53.379 { 00:24:53.379 "name": null, 00:24:53.379 "uuid": "274da511-d0c7-488b-9e01-b2c9b5eaf8ad", 00:24:53.379 "is_configured": false, 00:24:53.379 "data_offset": 0, 00:24:53.379 "data_size": 63488 00:24:53.379 }, 00:24:53.379 { 00:24:53.379 "name": "BaseBdev3", 00:24:53.379 "uuid": "6a6801b4-5481-469a-b930-3f60cae3f729", 00:24:53.379 "is_configured": true, 00:24:53.379 "data_offset": 2048, 00:24:53.379 "data_size": 63488 00:24:53.379 } 00:24:53.379 ] 00:24:53.379 }' 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:53.379 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.988 [2024-11-26 17:21:23.933312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:53.988 BaseBdev1 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.988 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.988 [ 00:24:53.988 { 00:24:53.988 "name": "BaseBdev1", 00:24:53.988 
"aliases": [ 00:24:53.988 "ab7f379b-ca86-457a-ab0c-013c881d3313" 00:24:53.988 ], 00:24:53.988 "product_name": "Malloc disk", 00:24:53.988 "block_size": 512, 00:24:53.988 "num_blocks": 65536, 00:24:53.988 "uuid": "ab7f379b-ca86-457a-ab0c-013c881d3313", 00:24:53.988 "assigned_rate_limits": { 00:24:53.988 "rw_ios_per_sec": 0, 00:24:53.988 "rw_mbytes_per_sec": 0, 00:24:53.988 "r_mbytes_per_sec": 0, 00:24:53.988 "w_mbytes_per_sec": 0 00:24:53.988 }, 00:24:53.988 "claimed": true, 00:24:53.988 "claim_type": "exclusive_write", 00:24:53.988 "zoned": false, 00:24:53.988 "supported_io_types": { 00:24:53.988 "read": true, 00:24:53.988 "write": true, 00:24:53.988 "unmap": true, 00:24:53.988 "flush": true, 00:24:53.988 "reset": true, 00:24:53.988 "nvme_admin": false, 00:24:53.988 "nvme_io": false, 00:24:53.988 "nvme_io_md": false, 00:24:53.988 "write_zeroes": true, 00:24:53.988 "zcopy": true, 00:24:53.988 "get_zone_info": false, 00:24:53.988 "zone_management": false, 00:24:53.988 "zone_append": false, 00:24:53.988 "compare": false, 00:24:53.989 "compare_and_write": false, 00:24:53.989 "abort": true, 00:24:53.989 "seek_hole": false, 00:24:53.989 "seek_data": false, 00:24:53.989 "copy": true, 00:24:53.989 "nvme_iov_md": false 00:24:53.989 }, 00:24:53.989 "memory_domains": [ 00:24:53.989 { 00:24:53.989 "dma_device_id": "system", 00:24:53.989 "dma_device_type": 1 00:24:53.989 }, 00:24:53.989 { 00:24:53.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:53.989 "dma_device_type": 2 00:24:53.989 } 00:24:53.989 ], 00:24:53.989 "driver_specific": {} 00:24:53.989 } 00:24:53.989 ] 00:24:53.989 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.989 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:53.989 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:53.989 17:21:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:53.989 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:53.989 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:53.989 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:53.989 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:53.989 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:53.989 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:53.989 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:53.989 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:53.989 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:53.989 17:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.989 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.989 17:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.989 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.989 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:53.989 "name": "Existed_Raid", 00:24:53.989 "uuid": "629ddded-f1e6-43cf-9709-e996c86652d2", 00:24:53.989 "strip_size_kb": 64, 00:24:53.989 "state": "configuring", 00:24:53.989 "raid_level": "raid0", 00:24:53.989 "superblock": true, 00:24:53.989 "num_base_bdevs": 3, 00:24:53.989 
"num_base_bdevs_discovered": 2, 00:24:53.989 "num_base_bdevs_operational": 3, 00:24:53.989 "base_bdevs_list": [ 00:24:53.989 { 00:24:53.989 "name": "BaseBdev1", 00:24:53.989 "uuid": "ab7f379b-ca86-457a-ab0c-013c881d3313", 00:24:53.989 "is_configured": true, 00:24:53.989 "data_offset": 2048, 00:24:53.989 "data_size": 63488 00:24:53.989 }, 00:24:53.989 { 00:24:53.989 "name": null, 00:24:53.989 "uuid": "274da511-d0c7-488b-9e01-b2c9b5eaf8ad", 00:24:53.989 "is_configured": false, 00:24:53.989 "data_offset": 0, 00:24:53.989 "data_size": 63488 00:24:53.989 }, 00:24:53.989 { 00:24:53.989 "name": "BaseBdev3", 00:24:53.989 "uuid": "6a6801b4-5481-469a-b930-3f60cae3f729", 00:24:53.989 "is_configured": true, 00:24:53.989 "data_offset": 2048, 00:24:53.989 "data_size": 63488 00:24:53.989 } 00:24:53.989 ] 00:24:53.989 }' 00:24:53.989 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:53.989 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.557 17:21:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:54.557 [2024-11-26 17:21:24.428677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:54.557 17:21:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:54.557 "name": "Existed_Raid", 00:24:54.557 "uuid": "629ddded-f1e6-43cf-9709-e996c86652d2", 00:24:54.557 "strip_size_kb": 64, 00:24:54.557 "state": "configuring", 00:24:54.557 "raid_level": "raid0", 00:24:54.557 "superblock": true, 00:24:54.557 "num_base_bdevs": 3, 00:24:54.557 "num_base_bdevs_discovered": 1, 00:24:54.557 "num_base_bdevs_operational": 3, 00:24:54.557 "base_bdevs_list": [ 00:24:54.557 { 00:24:54.557 "name": "BaseBdev1", 00:24:54.557 "uuid": "ab7f379b-ca86-457a-ab0c-013c881d3313", 00:24:54.557 "is_configured": true, 00:24:54.557 "data_offset": 2048, 00:24:54.557 "data_size": 63488 00:24:54.557 }, 00:24:54.557 { 00:24:54.557 "name": null, 00:24:54.557 "uuid": "274da511-d0c7-488b-9e01-b2c9b5eaf8ad", 00:24:54.557 "is_configured": false, 00:24:54.557 "data_offset": 0, 00:24:54.557 "data_size": 63488 00:24:54.557 }, 00:24:54.557 { 00:24:54.557 "name": null, 00:24:54.557 "uuid": "6a6801b4-5481-469a-b930-3f60cae3f729", 00:24:54.557 "is_configured": false, 00:24:54.557 "data_offset": 0, 00:24:54.557 "data_size": 63488 00:24:54.557 } 00:24:54.557 ] 00:24:54.557 }' 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:54.557 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:54.816 17:21:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:54.816 [2024-11-26 17:21:24.860229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:54.816 "name": "Existed_Raid", 00:24:54.816 "uuid": "629ddded-f1e6-43cf-9709-e996c86652d2", 00:24:54.816 "strip_size_kb": 64, 00:24:54.816 "state": "configuring", 00:24:54.816 "raid_level": "raid0", 00:24:54.816 "superblock": true, 00:24:54.816 "num_base_bdevs": 3, 00:24:54.816 "num_base_bdevs_discovered": 2, 00:24:54.816 "num_base_bdevs_operational": 3, 00:24:54.816 "base_bdevs_list": [ 00:24:54.816 { 00:24:54.816 "name": "BaseBdev1", 00:24:54.816 "uuid": "ab7f379b-ca86-457a-ab0c-013c881d3313", 00:24:54.816 "is_configured": true, 00:24:54.816 "data_offset": 2048, 00:24:54.816 "data_size": 63488 00:24:54.816 }, 00:24:54.816 { 00:24:54.816 "name": null, 00:24:54.816 "uuid": "274da511-d0c7-488b-9e01-b2c9b5eaf8ad", 00:24:54.816 "is_configured": false, 00:24:54.816 "data_offset": 0, 00:24:54.816 "data_size": 63488 00:24:54.816 }, 00:24:54.816 { 00:24:54.816 "name": "BaseBdev3", 00:24:54.816 "uuid": "6a6801b4-5481-469a-b930-3f60cae3f729", 00:24:54.816 "is_configured": true, 00:24:54.816 "data_offset": 2048, 00:24:54.816 "data_size": 63488 00:24:54.816 } 00:24:54.816 ] 00:24:54.816 }' 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:54.816 17:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:55.383 [2024-11-26 17:21:25.295711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:55.383 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:55.384 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:55.384 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.384 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:55.384 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.384 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:55.384 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.384 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:55.384 "name": "Existed_Raid", 00:24:55.384 "uuid": "629ddded-f1e6-43cf-9709-e996c86652d2", 00:24:55.384 "strip_size_kb": 64, 00:24:55.384 "state": "configuring", 00:24:55.384 "raid_level": "raid0", 00:24:55.384 "superblock": true, 00:24:55.384 "num_base_bdevs": 3, 00:24:55.384 "num_base_bdevs_discovered": 1, 00:24:55.384 "num_base_bdevs_operational": 3, 00:24:55.384 "base_bdevs_list": [ 00:24:55.384 { 00:24:55.384 "name": null, 00:24:55.384 "uuid": "ab7f379b-ca86-457a-ab0c-013c881d3313", 00:24:55.384 "is_configured": false, 00:24:55.384 "data_offset": 0, 00:24:55.384 "data_size": 63488 00:24:55.384 }, 00:24:55.384 { 00:24:55.384 "name": null, 00:24:55.384 "uuid": "274da511-d0c7-488b-9e01-b2c9b5eaf8ad", 00:24:55.384 "is_configured": false, 00:24:55.384 "data_offset": 0, 00:24:55.384 "data_size": 63488 00:24:55.384 
}, 00:24:55.384 { 00:24:55.384 "name": "BaseBdev3", 00:24:55.384 "uuid": "6a6801b4-5481-469a-b930-3f60cae3f729", 00:24:55.384 "is_configured": true, 00:24:55.384 "data_offset": 2048, 00:24:55.384 "data_size": 63488 00:24:55.384 } 00:24:55.384 ] 00:24:55.384 }' 00:24:55.384 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:55.384 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:55.951 [2024-11-26 17:21:25.834794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:55.951 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:55.952 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:55.952 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:55.952 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:55.952 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:55.952 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.952 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:55.952 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.952 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:55.952 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.952 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:55.952 "name": "Existed_Raid", 00:24:55.952 "uuid": "629ddded-f1e6-43cf-9709-e996c86652d2", 00:24:55.952 "strip_size_kb": 64, 00:24:55.952 "state": "configuring", 00:24:55.952 "raid_level": "raid0", 00:24:55.952 "superblock": true, 00:24:55.952 "num_base_bdevs": 3, 00:24:55.952 "num_base_bdevs_discovered": 2, 00:24:55.952 
"num_base_bdevs_operational": 3, 00:24:55.952 "base_bdevs_list": [ 00:24:55.952 { 00:24:55.952 "name": null, 00:24:55.952 "uuid": "ab7f379b-ca86-457a-ab0c-013c881d3313", 00:24:55.952 "is_configured": false, 00:24:55.952 "data_offset": 0, 00:24:55.952 "data_size": 63488 00:24:55.952 }, 00:24:55.952 { 00:24:55.952 "name": "BaseBdev2", 00:24:55.952 "uuid": "274da511-d0c7-488b-9e01-b2c9b5eaf8ad", 00:24:55.952 "is_configured": true, 00:24:55.952 "data_offset": 2048, 00:24:55.952 "data_size": 63488 00:24:55.952 }, 00:24:55.952 { 00:24:55.952 "name": "BaseBdev3", 00:24:55.952 "uuid": "6a6801b4-5481-469a-b930-3f60cae3f729", 00:24:55.952 "is_configured": true, 00:24:55.952 "data_offset": 2048, 00:24:55.952 "data_size": 63488 00:24:55.952 } 00:24:55.952 ] 00:24:55.952 }' 00:24:55.952 17:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:55.952 17:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:56.211 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.211 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.211 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:56.211 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:56.211 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.211 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:24:56.211 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- 
# jq -r '.[0].base_bdevs_list[0].uuid' 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ab7f379b-ca86-457a-ab0c-013c881d3313 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:56.470 [2024-11-26 17:21:26.428685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:56.470 [2024-11-26 17:21:26.429109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:56.470 [2024-11-26 17:21:26.429137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:56.470 [2024-11-26 17:21:26.429417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:56.470 [2024-11-26 17:21:26.429590] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:56.470 [2024-11-26 17:21:26.429602] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:24:56.470 NewBaseBdev 00:24:56.470 [2024-11-26 17:21:26.429739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:56.470 [ 00:24:56.470 { 00:24:56.470 "name": "NewBaseBdev", 00:24:56.470 "aliases": [ 00:24:56.470 "ab7f379b-ca86-457a-ab0c-013c881d3313" 00:24:56.470 ], 00:24:56.470 "product_name": "Malloc disk", 00:24:56.470 "block_size": 512, 00:24:56.470 "num_blocks": 65536, 00:24:56.470 "uuid": "ab7f379b-ca86-457a-ab0c-013c881d3313", 00:24:56.470 "assigned_rate_limits": { 00:24:56.470 "rw_ios_per_sec": 0, 00:24:56.470 "rw_mbytes_per_sec": 0, 00:24:56.470 "r_mbytes_per_sec": 0, 00:24:56.470 "w_mbytes_per_sec": 0 00:24:56.470 }, 00:24:56.470 "claimed": true, 00:24:56.470 "claim_type": "exclusive_write", 00:24:56.470 "zoned": false, 00:24:56.470 "supported_io_types": { 00:24:56.470 "read": true, 00:24:56.470 "write": true, 00:24:56.470 "unmap": true, 00:24:56.470 "flush": true, 00:24:56.470 
"reset": true, 00:24:56.470 "nvme_admin": false, 00:24:56.470 "nvme_io": false, 00:24:56.470 "nvme_io_md": false, 00:24:56.470 "write_zeroes": true, 00:24:56.470 "zcopy": true, 00:24:56.470 "get_zone_info": false, 00:24:56.470 "zone_management": false, 00:24:56.470 "zone_append": false, 00:24:56.470 "compare": false, 00:24:56.470 "compare_and_write": false, 00:24:56.470 "abort": true, 00:24:56.470 "seek_hole": false, 00:24:56.470 "seek_data": false, 00:24:56.470 "copy": true, 00:24:56.470 "nvme_iov_md": false 00:24:56.470 }, 00:24:56.470 "memory_domains": [ 00:24:56.470 { 00:24:56.470 "dma_device_id": "system", 00:24:56.470 "dma_device_type": 1 00:24:56.470 }, 00:24:56.470 { 00:24:56.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.470 "dma_device_type": 2 00:24:56.470 } 00:24:56.470 ], 00:24:56.470 "driver_specific": {} 00:24:56.470 } 00:24:56.470 ] 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:56.470 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:56.471 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:56.471 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:56.471 17:21:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:56.471 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:56.471 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:56.471 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.471 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:56.471 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.471 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:56.471 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.471 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:56.471 "name": "Existed_Raid", 00:24:56.471 "uuid": "629ddded-f1e6-43cf-9709-e996c86652d2", 00:24:56.471 "strip_size_kb": 64, 00:24:56.471 "state": "online", 00:24:56.471 "raid_level": "raid0", 00:24:56.471 "superblock": true, 00:24:56.471 "num_base_bdevs": 3, 00:24:56.471 "num_base_bdevs_discovered": 3, 00:24:56.471 "num_base_bdevs_operational": 3, 00:24:56.471 "base_bdevs_list": [ 00:24:56.471 { 00:24:56.471 "name": "NewBaseBdev", 00:24:56.471 "uuid": "ab7f379b-ca86-457a-ab0c-013c881d3313", 00:24:56.471 "is_configured": true, 00:24:56.471 "data_offset": 2048, 00:24:56.471 "data_size": 63488 00:24:56.471 }, 00:24:56.471 { 00:24:56.471 "name": "BaseBdev2", 00:24:56.471 "uuid": "274da511-d0c7-488b-9e01-b2c9b5eaf8ad", 00:24:56.471 "is_configured": true, 00:24:56.471 "data_offset": 2048, 00:24:56.471 "data_size": 63488 00:24:56.471 }, 00:24:56.471 { 00:24:56.471 "name": "BaseBdev3", 00:24:56.471 "uuid": "6a6801b4-5481-469a-b930-3f60cae3f729", 00:24:56.471 "is_configured": true, 00:24:56.471 "data_offset": 2048, 
00:24:56.471 "data_size": 63488 00:24:56.471 } 00:24:56.471 ] 00:24:56.471 }' 00:24:56.471 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:56.471 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:57.039 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:24:57.039 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:57.039 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:57.039 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:57.039 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:24:57.039 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:57.039 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:57.039 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:57.039 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.039 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:57.039 [2024-11-26 17:21:26.916731] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:57.039 17:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.039 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:57.039 "name": "Existed_Raid", 00:24:57.039 "aliases": [ 00:24:57.039 "629ddded-f1e6-43cf-9709-e996c86652d2" 00:24:57.039 ], 00:24:57.039 "product_name": "Raid Volume", 00:24:57.039 "block_size": 512, 00:24:57.039 
"num_blocks": 190464, 00:24:57.039 "uuid": "629ddded-f1e6-43cf-9709-e996c86652d2", 00:24:57.039 "assigned_rate_limits": { 00:24:57.039 "rw_ios_per_sec": 0, 00:24:57.039 "rw_mbytes_per_sec": 0, 00:24:57.039 "r_mbytes_per_sec": 0, 00:24:57.039 "w_mbytes_per_sec": 0 00:24:57.039 }, 00:24:57.039 "claimed": false, 00:24:57.039 "zoned": false, 00:24:57.039 "supported_io_types": { 00:24:57.039 "read": true, 00:24:57.039 "write": true, 00:24:57.039 "unmap": true, 00:24:57.039 "flush": true, 00:24:57.039 "reset": true, 00:24:57.039 "nvme_admin": false, 00:24:57.039 "nvme_io": false, 00:24:57.039 "nvme_io_md": false, 00:24:57.039 "write_zeroes": true, 00:24:57.039 "zcopy": false, 00:24:57.039 "get_zone_info": false, 00:24:57.039 "zone_management": false, 00:24:57.039 "zone_append": false, 00:24:57.039 "compare": false, 00:24:57.039 "compare_and_write": false, 00:24:57.039 "abort": false, 00:24:57.039 "seek_hole": false, 00:24:57.039 "seek_data": false, 00:24:57.039 "copy": false, 00:24:57.039 "nvme_iov_md": false 00:24:57.039 }, 00:24:57.039 "memory_domains": [ 00:24:57.039 { 00:24:57.039 "dma_device_id": "system", 00:24:57.039 "dma_device_type": 1 00:24:57.039 }, 00:24:57.039 { 00:24:57.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.039 "dma_device_type": 2 00:24:57.039 }, 00:24:57.039 { 00:24:57.039 "dma_device_id": "system", 00:24:57.039 "dma_device_type": 1 00:24:57.039 }, 00:24:57.039 { 00:24:57.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.039 "dma_device_type": 2 00:24:57.039 }, 00:24:57.039 { 00:24:57.039 "dma_device_id": "system", 00:24:57.039 "dma_device_type": 1 00:24:57.039 }, 00:24:57.039 { 00:24:57.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.039 "dma_device_type": 2 00:24:57.039 } 00:24:57.039 ], 00:24:57.039 "driver_specific": { 00:24:57.039 "raid": { 00:24:57.039 "uuid": "629ddded-f1e6-43cf-9709-e996c86652d2", 00:24:57.039 "strip_size_kb": 64, 00:24:57.039 "state": "online", 00:24:57.039 "raid_level": "raid0", 00:24:57.039 
"superblock": true, 00:24:57.039 "num_base_bdevs": 3, 00:24:57.039 "num_base_bdevs_discovered": 3, 00:24:57.039 "num_base_bdevs_operational": 3, 00:24:57.039 "base_bdevs_list": [ 00:24:57.039 { 00:24:57.039 "name": "NewBaseBdev", 00:24:57.039 "uuid": "ab7f379b-ca86-457a-ab0c-013c881d3313", 00:24:57.039 "is_configured": true, 00:24:57.039 "data_offset": 2048, 00:24:57.039 "data_size": 63488 00:24:57.039 }, 00:24:57.039 { 00:24:57.039 "name": "BaseBdev2", 00:24:57.039 "uuid": "274da511-d0c7-488b-9e01-b2c9b5eaf8ad", 00:24:57.039 "is_configured": true, 00:24:57.039 "data_offset": 2048, 00:24:57.039 "data_size": 63488 00:24:57.039 }, 00:24:57.039 { 00:24:57.039 "name": "BaseBdev3", 00:24:57.039 "uuid": "6a6801b4-5481-469a-b930-3f60cae3f729", 00:24:57.039 "is_configured": true, 00:24:57.039 "data_offset": 2048, 00:24:57.039 "data_size": 63488 00:24:57.039 } 00:24:57.039 ] 00:24:57.039 } 00:24:57.039 } 00:24:57.039 }' 00:24:57.039 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:57.039 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:24:57.039 BaseBdev2 00:24:57.039 BaseBdev3' 00:24:57.039 17:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:57.039 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:57.039 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:57.039 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:24:57.039 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.039 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:24:57.039 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:57.040 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:57.298 [2024-11-26 17:21:27.192020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:57.298 [2024-11-26 17:21:27.192174] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:57.298 [2024-11-26 17:21:27.192309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:57.298 [2024-11-26 17:21:27.192371] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:57.298 [2024-11-26 17:21:27.192387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64541 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64541 ']' 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64541 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # uname 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64541 00:24:57.298 killing process with pid 64541 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:57.298 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64541' 00:24:57.299 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64541 00:24:57.299 [2024-11-26 17:21:27.243270] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:57.299 17:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64541 00:24:57.557 [2024-11-26 17:21:27.557729] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:58.954 ************************************ 00:24:58.954 END TEST raid_state_function_test_sb 00:24:58.954 ************************************ 00:24:58.954 17:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:24:58.954 00:24:58.954 real 0m10.471s 00:24:58.954 user 0m16.405s 00:24:58.954 sys 0m2.134s 00:24:58.954 17:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:58.954 17:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:58.954 17:21:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:24:58.954 17:21:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:58.954 17:21:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:24:58.954 17:21:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:58.954 ************************************ 00:24:58.954 START TEST raid_superblock_test 00:24:58.954 ************************************ 00:24:58.954 17:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:24:58.954 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:24:58.954 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:24:58.954 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:58.954 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:58.954 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:58.954 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:58.954 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:58.954 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # 
strip_size_create_arg='-z 64' 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65167 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65167 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65167 ']' 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.955 17:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.955 [2024-11-26 17:21:28.925399] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:24:58.955 [2024-11-26 17:21:28.925780] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65167 ] 00:24:59.213 [2024-11-26 17:21:29.109459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.213 [2024-11-26 17:21:29.253761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.471 [2024-11-26 17:21:29.477983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:59.471 [2024-11-26 17:21:29.478062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:59.729 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.729 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:24:59.729 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:59.729 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:59.729 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:59.729 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:59.729 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:59.729 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:59.729 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:59.729 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:59.729 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:24:59.729 
17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.729 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.988 malloc1 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.988 [2024-11-26 17:21:29.858159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:59.988 [2024-11-26 17:21:29.858235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.988 [2024-11-26 17:21:29.858265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:59.988 [2024-11-26 17:21:29.858277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.988 [2024-11-26 17:21:29.860909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.988 [2024-11-26 17:21:29.860950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:59.988 pt1 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.988 malloc2 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.988 [2024-11-26 17:21:29.918540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:59.988 [2024-11-26 17:21:29.918729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.988 [2024-11-26 17:21:29.918801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:59.988 [2024-11-26 17:21:29.918875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.988 [2024-11-26 17:21:29.921597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.988 [2024-11-26 17:21:29.921745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:59.988 
pt2 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.988 malloc3 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.988 [2024-11-26 17:21:29.989202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:59.988 [2024-11-26 17:21:29.989409] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.988 [2024-11-26 17:21:29.989490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:59.988 [2024-11-26 17:21:29.989585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.988 [2024-11-26 17:21:29.992301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.988 [2024-11-26 17:21:29.992454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:59.988 pt3 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.988 17:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.988 [2024-11-26 17:21:30.001383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:59.988 [2024-11-26 17:21:30.003924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:59.988 [2024-11-26 17:21:30.003998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:59.988 [2024-11-26 17:21:30.004212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:59.988 [2024-11-26 17:21:30.004229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:59.988 [2024-11-26 17:21:30.004565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:24:59.988 [2024-11-26 17:21:30.004766] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:59.988 [2024-11-26 17:21:30.004777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:59.988 [2024-11-26 17:21:30.004974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.988 17:21:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:59.988 "name": "raid_bdev1", 00:24:59.988 "uuid": "84bea15d-510e-4521-9671-20ff6012310e", 00:24:59.988 "strip_size_kb": 64, 00:24:59.988 "state": "online", 00:24:59.988 "raid_level": "raid0", 00:24:59.988 "superblock": true, 00:24:59.988 "num_base_bdevs": 3, 00:24:59.988 "num_base_bdevs_discovered": 3, 00:24:59.988 "num_base_bdevs_operational": 3, 00:24:59.988 "base_bdevs_list": [ 00:24:59.988 { 00:24:59.988 "name": "pt1", 00:24:59.988 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:59.988 "is_configured": true, 00:24:59.988 "data_offset": 2048, 00:24:59.988 "data_size": 63488 00:24:59.988 }, 00:24:59.988 { 00:24:59.988 "name": "pt2", 00:24:59.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:59.988 "is_configured": true, 00:24:59.988 "data_offset": 2048, 00:24:59.988 "data_size": 63488 00:24:59.988 }, 00:24:59.988 { 00:24:59.988 "name": "pt3", 00:24:59.988 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:59.988 "is_configured": true, 00:24:59.988 "data_offset": 2048, 00:24:59.988 "data_size": 63488 00:24:59.988 } 00:24:59.988 ] 00:24:59.988 }' 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:59.988 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.556 [2024-11-26 17:21:30.421061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:00.556 "name": "raid_bdev1", 00:25:00.556 "aliases": [ 00:25:00.556 "84bea15d-510e-4521-9671-20ff6012310e" 00:25:00.556 ], 00:25:00.556 "product_name": "Raid Volume", 00:25:00.556 "block_size": 512, 00:25:00.556 "num_blocks": 190464, 00:25:00.556 "uuid": "84bea15d-510e-4521-9671-20ff6012310e", 00:25:00.556 "assigned_rate_limits": { 00:25:00.556 "rw_ios_per_sec": 0, 00:25:00.556 "rw_mbytes_per_sec": 0, 00:25:00.556 "r_mbytes_per_sec": 0, 00:25:00.556 "w_mbytes_per_sec": 0 00:25:00.556 }, 00:25:00.556 "claimed": false, 00:25:00.556 "zoned": false, 00:25:00.556 "supported_io_types": { 00:25:00.556 "read": true, 00:25:00.556 "write": true, 00:25:00.556 "unmap": true, 00:25:00.556 "flush": true, 00:25:00.556 "reset": true, 00:25:00.556 "nvme_admin": false, 00:25:00.556 "nvme_io": false, 00:25:00.556 "nvme_io_md": false, 00:25:00.556 "write_zeroes": true, 00:25:00.556 "zcopy": false, 00:25:00.556 "get_zone_info": false, 00:25:00.556 "zone_management": false, 00:25:00.556 "zone_append": false, 00:25:00.556 "compare": 
false, 00:25:00.556 "compare_and_write": false, 00:25:00.556 "abort": false, 00:25:00.556 "seek_hole": false, 00:25:00.556 "seek_data": false, 00:25:00.556 "copy": false, 00:25:00.556 "nvme_iov_md": false 00:25:00.556 }, 00:25:00.556 "memory_domains": [ 00:25:00.556 { 00:25:00.556 "dma_device_id": "system", 00:25:00.556 "dma_device_type": 1 00:25:00.556 }, 00:25:00.556 { 00:25:00.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:00.556 "dma_device_type": 2 00:25:00.556 }, 00:25:00.556 { 00:25:00.556 "dma_device_id": "system", 00:25:00.556 "dma_device_type": 1 00:25:00.556 }, 00:25:00.556 { 00:25:00.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:00.556 "dma_device_type": 2 00:25:00.556 }, 00:25:00.556 { 00:25:00.556 "dma_device_id": "system", 00:25:00.556 "dma_device_type": 1 00:25:00.556 }, 00:25:00.556 { 00:25:00.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:00.556 "dma_device_type": 2 00:25:00.556 } 00:25:00.556 ], 00:25:00.556 "driver_specific": { 00:25:00.556 "raid": { 00:25:00.556 "uuid": "84bea15d-510e-4521-9671-20ff6012310e", 00:25:00.556 "strip_size_kb": 64, 00:25:00.556 "state": "online", 00:25:00.556 "raid_level": "raid0", 00:25:00.556 "superblock": true, 00:25:00.556 "num_base_bdevs": 3, 00:25:00.556 "num_base_bdevs_discovered": 3, 00:25:00.556 "num_base_bdevs_operational": 3, 00:25:00.556 "base_bdevs_list": [ 00:25:00.556 { 00:25:00.556 "name": "pt1", 00:25:00.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:00.556 "is_configured": true, 00:25:00.556 "data_offset": 2048, 00:25:00.556 "data_size": 63488 00:25:00.556 }, 00:25:00.556 { 00:25:00.556 "name": "pt2", 00:25:00.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:00.556 "is_configured": true, 00:25:00.556 "data_offset": 2048, 00:25:00.556 "data_size": 63488 00:25:00.556 }, 00:25:00.556 { 00:25:00.556 "name": "pt3", 00:25:00.556 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:00.556 "is_configured": true, 00:25:00.556 "data_offset": 2048, 00:25:00.556 "data_size": 
63488 00:25:00.556 } 00:25:00.556 ] 00:25:00.556 } 00:25:00.556 } 00:25:00.556 }' 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:00.556 pt2 00:25:00.556 pt3' 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:00.556 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:00.557 17:21:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.557 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.557 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.557 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:00.557 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:00.557 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:00.557 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:00.557 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:00.557 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.557 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.816 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.816 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:00.816 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:00.816 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:00.816 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:00.816 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.816 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.816 [2024-11-26 17:21:30.696898] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:00.816 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:25:00.816 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=84bea15d-510e-4521-9671-20ff6012310e 00:25:00.816 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 84bea15d-510e-4521-9671-20ff6012310e ']' 00:25:00.816 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:00.816 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.816 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.816 [2024-11-26 17:21:30.740550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:00.816 [2024-11-26 17:21:30.740586] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:00.816 [2024-11-26 17:21:30.740691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:00.816 [2024-11-26 17:21:30.740763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:00.816 [2024-11-26 17:21:30.740775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:00.816 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.817 [2024-11-26 17:21:30.876411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:00.817 [2024-11-26 17:21:30.878870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:00.817 [2024-11-26 17:21:30.878932] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:00.817 [2024-11-26 17:21:30.878997] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:00.817 [2024-11-26 17:21:30.879073] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:00.817 [2024-11-26 17:21:30.879096] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:25:00.817 [2024-11-26 17:21:30.879120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:00.817 [2024-11-26 17:21:30.879134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:00.817 request: 00:25:00.817 { 00:25:00.817 "name": "raid_bdev1", 00:25:00.817 "raid_level": "raid0", 00:25:00.817 "base_bdevs": [ 00:25:00.817 "malloc1", 00:25:00.817 "malloc2", 00:25:00.817 "malloc3" 00:25:00.817 ], 00:25:00.817 "strip_size_kb": 64, 00:25:00.817 "superblock": false, 00:25:00.817 "method": "bdev_raid_create", 00:25:00.817 "req_id": 1 00:25:00.817 } 00:25:00.817 Got JSON-RPC error response 00:25:00.817 response: 00:25:00.817 { 00:25:00.817 "code": -17, 00:25:00.817 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:00.817 } 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.817 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.076 [2024-11-26 17:21:30.932261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:01.076 [2024-11-26 17:21:30.932341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.076 [2024-11-26 17:21:30.932370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:01.076 [2024-11-26 17:21:30.932382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:01.076 [2024-11-26 17:21:30.935187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.076 [2024-11-26 17:21:30.935231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:01.076 [2024-11-26 17:21:30.935338] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:01.076 [2024-11-26 17:21:30.935397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:25:01.076 pt1 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.076 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:01.076 "name": "raid_bdev1", 00:25:01.076 "uuid": "84bea15d-510e-4521-9671-20ff6012310e", 00:25:01.076 
"strip_size_kb": 64, 00:25:01.076 "state": "configuring", 00:25:01.077 "raid_level": "raid0", 00:25:01.077 "superblock": true, 00:25:01.077 "num_base_bdevs": 3, 00:25:01.077 "num_base_bdevs_discovered": 1, 00:25:01.077 "num_base_bdevs_operational": 3, 00:25:01.077 "base_bdevs_list": [ 00:25:01.077 { 00:25:01.077 "name": "pt1", 00:25:01.077 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:01.077 "is_configured": true, 00:25:01.077 "data_offset": 2048, 00:25:01.077 "data_size": 63488 00:25:01.077 }, 00:25:01.077 { 00:25:01.077 "name": null, 00:25:01.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:01.077 "is_configured": false, 00:25:01.077 "data_offset": 2048, 00:25:01.077 "data_size": 63488 00:25:01.077 }, 00:25:01.077 { 00:25:01.077 "name": null, 00:25:01.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:01.077 "is_configured": false, 00:25:01.077 "data_offset": 2048, 00:25:01.077 "data_size": 63488 00:25:01.077 } 00:25:01.077 ] 00:25:01.077 }' 00:25:01.077 17:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:01.077 17:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.334 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:25:01.334 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:01.334 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.334 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.334 [2024-11-26 17:21:31.339704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:01.334 [2024-11-26 17:21:31.339792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.334 [2024-11-26 17:21:31.339827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:25:01.334 [2024-11-26 17:21:31.339839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:01.334 [2024-11-26 17:21:31.340343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.334 [2024-11-26 17:21:31.340363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:01.334 [2024-11-26 17:21:31.340462] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:01.334 [2024-11-26 17:21:31.340492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:01.334 pt2 00:25:01.334 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.334 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:25:01.334 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.334 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.334 [2024-11-26 17:21:31.351690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:01.334 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.334 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:25:01.334 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:01.334 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:01.334 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:01.334 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:01.335 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:01.335 17:21:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:01.335 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.335 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:01.335 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:01.335 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.335 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.335 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.335 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.335 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.335 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:01.335 "name": "raid_bdev1", 00:25:01.335 "uuid": "84bea15d-510e-4521-9671-20ff6012310e", 00:25:01.335 "strip_size_kb": 64, 00:25:01.335 "state": "configuring", 00:25:01.335 "raid_level": "raid0", 00:25:01.335 "superblock": true, 00:25:01.335 "num_base_bdevs": 3, 00:25:01.335 "num_base_bdevs_discovered": 1, 00:25:01.335 "num_base_bdevs_operational": 3, 00:25:01.335 "base_bdevs_list": [ 00:25:01.335 { 00:25:01.335 "name": "pt1", 00:25:01.335 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:01.335 "is_configured": true, 00:25:01.335 "data_offset": 2048, 00:25:01.335 "data_size": 63488 00:25:01.335 }, 00:25:01.335 { 00:25:01.335 "name": null, 00:25:01.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:01.335 "is_configured": false, 00:25:01.335 "data_offset": 0, 00:25:01.335 "data_size": 63488 00:25:01.335 }, 00:25:01.335 { 00:25:01.335 "name": null, 00:25:01.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:01.335 
"is_configured": false, 00:25:01.335 "data_offset": 2048, 00:25:01.335 "data_size": 63488 00:25:01.335 } 00:25:01.335 ] 00:25:01.335 }' 00:25:01.335 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:01.335 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.901 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:01.901 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:01.901 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:01.901 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.901 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.901 [2024-11-26 17:21:31.783300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:01.901 [2024-11-26 17:21:31.783390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.901 [2024-11-26 17:21:31.783418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:01.901 [2024-11-26 17:21:31.783433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:01.901 [2024-11-26 17:21:31.783973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.901 [2024-11-26 17:21:31.784006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:01.901 [2024-11-26 17:21:31.784102] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:01.901 [2024-11-26 17:21:31.784130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:01.901 pt2 00:25:01.901 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:01.901 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.902 [2024-11-26 17:21:31.795262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:01.902 [2024-11-26 17:21:31.795323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.902 [2024-11-26 17:21:31.795344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:01.902 [2024-11-26 17:21:31.795360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:01.902 [2024-11-26 17:21:31.795827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.902 [2024-11-26 17:21:31.795862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:01.902 [2024-11-26 17:21:31.795944] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:01.902 [2024-11-26 17:21:31.795971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:01.902 [2024-11-26 17:21:31.796102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:01.902 [2024-11-26 17:21:31.796116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:01.902 [2024-11-26 17:21:31.796389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:01.902 [2024-11-26 17:21:31.796554] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:01.902 [2024-11-26 17:21:31.796564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:01.902 [2024-11-26 17:21:31.796706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:01.902 pt3 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:01.902 "name": "raid_bdev1", 00:25:01.902 "uuid": "84bea15d-510e-4521-9671-20ff6012310e", 00:25:01.902 "strip_size_kb": 64, 00:25:01.902 "state": "online", 00:25:01.902 "raid_level": "raid0", 00:25:01.902 "superblock": true, 00:25:01.902 "num_base_bdevs": 3, 00:25:01.902 "num_base_bdevs_discovered": 3, 00:25:01.902 "num_base_bdevs_operational": 3, 00:25:01.902 "base_bdevs_list": [ 00:25:01.902 { 00:25:01.902 "name": "pt1", 00:25:01.902 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:01.902 "is_configured": true, 00:25:01.902 "data_offset": 2048, 00:25:01.902 "data_size": 63488 00:25:01.902 }, 00:25:01.902 { 00:25:01.902 "name": "pt2", 00:25:01.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:01.902 "is_configured": true, 00:25:01.902 "data_offset": 2048, 00:25:01.902 "data_size": 63488 00:25:01.902 }, 00:25:01.902 { 00:25:01.902 "name": "pt3", 00:25:01.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:01.902 "is_configured": true, 00:25:01.902 "data_offset": 2048, 00:25:01.902 "data_size": 63488 00:25:01.902 } 00:25:01.902 ] 00:25:01.902 }' 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:01.902 17:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.159 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:02.159 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:02.159 17:21:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:02.159 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:02.159 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:02.159 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:02.159 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:02.159 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:02.159 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.159 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.159 [2024-11-26 17:21:32.219034] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:02.159 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.159 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:02.159 "name": "raid_bdev1", 00:25:02.159 "aliases": [ 00:25:02.159 "84bea15d-510e-4521-9671-20ff6012310e" 00:25:02.159 ], 00:25:02.159 "product_name": "Raid Volume", 00:25:02.159 "block_size": 512, 00:25:02.159 "num_blocks": 190464, 00:25:02.160 "uuid": "84bea15d-510e-4521-9671-20ff6012310e", 00:25:02.160 "assigned_rate_limits": { 00:25:02.160 "rw_ios_per_sec": 0, 00:25:02.160 "rw_mbytes_per_sec": 0, 00:25:02.160 "r_mbytes_per_sec": 0, 00:25:02.160 "w_mbytes_per_sec": 0 00:25:02.160 }, 00:25:02.160 "claimed": false, 00:25:02.160 "zoned": false, 00:25:02.160 "supported_io_types": { 00:25:02.160 "read": true, 00:25:02.160 "write": true, 00:25:02.160 "unmap": true, 00:25:02.160 "flush": true, 00:25:02.160 "reset": true, 00:25:02.160 "nvme_admin": false, 00:25:02.160 "nvme_io": false, 00:25:02.160 "nvme_io_md": false, 00:25:02.160 
"write_zeroes": true, 00:25:02.160 "zcopy": false, 00:25:02.160 "get_zone_info": false, 00:25:02.160 "zone_management": false, 00:25:02.160 "zone_append": false, 00:25:02.160 "compare": false, 00:25:02.160 "compare_and_write": false, 00:25:02.160 "abort": false, 00:25:02.160 "seek_hole": false, 00:25:02.160 "seek_data": false, 00:25:02.160 "copy": false, 00:25:02.160 "nvme_iov_md": false 00:25:02.160 }, 00:25:02.160 "memory_domains": [ 00:25:02.160 { 00:25:02.160 "dma_device_id": "system", 00:25:02.160 "dma_device_type": 1 00:25:02.160 }, 00:25:02.160 { 00:25:02.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.160 "dma_device_type": 2 00:25:02.160 }, 00:25:02.160 { 00:25:02.160 "dma_device_id": "system", 00:25:02.160 "dma_device_type": 1 00:25:02.160 }, 00:25:02.160 { 00:25:02.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.160 "dma_device_type": 2 00:25:02.160 }, 00:25:02.160 { 00:25:02.160 "dma_device_id": "system", 00:25:02.160 "dma_device_type": 1 00:25:02.160 }, 00:25:02.160 { 00:25:02.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.160 "dma_device_type": 2 00:25:02.160 } 00:25:02.160 ], 00:25:02.160 "driver_specific": { 00:25:02.160 "raid": { 00:25:02.160 "uuid": "84bea15d-510e-4521-9671-20ff6012310e", 00:25:02.160 "strip_size_kb": 64, 00:25:02.160 "state": "online", 00:25:02.160 "raid_level": "raid0", 00:25:02.160 "superblock": true, 00:25:02.160 "num_base_bdevs": 3, 00:25:02.160 "num_base_bdevs_discovered": 3, 00:25:02.160 "num_base_bdevs_operational": 3, 00:25:02.160 "base_bdevs_list": [ 00:25:02.160 { 00:25:02.160 "name": "pt1", 00:25:02.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:02.160 "is_configured": true, 00:25:02.160 "data_offset": 2048, 00:25:02.160 "data_size": 63488 00:25:02.160 }, 00:25:02.160 { 00:25:02.160 "name": "pt2", 00:25:02.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:02.160 "is_configured": true, 00:25:02.160 "data_offset": 2048, 00:25:02.160 "data_size": 63488 00:25:02.160 }, 00:25:02.160 
{ 00:25:02.160 "name": "pt3", 00:25:02.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:02.160 "is_configured": true, 00:25:02.160 "data_offset": 2048, 00:25:02.160 "data_size": 63488 00:25:02.160 } 00:25:02.160 ] 00:25:02.160 } 00:25:02.160 } 00:25:02.160 }' 00:25:02.160 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:02.418 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:02.418 pt2 00:25:02.418 pt3' 00:25:02.418 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:02.418 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:02.418 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:02.418 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:02.418 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:02.418 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.418 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.418 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.418 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:02.418 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:02.418 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:02.419 17:21:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.419 
[2024-11-26 17:21:32.486639] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 84bea15d-510e-4521-9671-20ff6012310e '!=' 84bea15d-510e-4521-9671-20ff6012310e ']' 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65167 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65167 ']' 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65167 00:25:02.419 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:25:02.678 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:02.678 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65167 00:25:02.678 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:02.678 killing process with pid 65167 00:25:02.678 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:02.678 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65167' 00:25:02.678 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65167 00:25:02.678 [2024-11-26 17:21:32.568331] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:02.678 [2024-11-26 17:21:32.568460] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:02.678 17:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65167 00:25:02.678 [2024-11-26 17:21:32.568541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:02.678 [2024-11-26 17:21:32.568557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:02.937 [2024-11-26 17:21:32.885165] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:04.354 17:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:25:04.354 00:25:04.354 real 0m5.242s 00:25:04.354 user 0m7.451s 00:25:04.354 sys 0m1.035s 00:25:04.354 17:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:04.354 ************************************ 00:25:04.354 END TEST raid_superblock_test 00:25:04.354 ************************************ 00:25:04.354 17:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.354 17:21:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:25:04.354 17:21:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:04.354 17:21:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:04.354 17:21:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:04.354 ************************************ 00:25:04.354 START TEST raid_read_error_test 00:25:04.354 ************************************ 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:25:04.354 17:21:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.s9dEHdvwxi 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65420 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65420 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65420 ']' 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:04.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:04.354 17:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.354 [2024-11-26 17:21:34.249550] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:25:04.354 [2024-11-26 17:21:34.249700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65420 ] 00:25:04.354 [2024-11-26 17:21:34.431788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.613 [2024-11-26 17:21:34.572210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.872 [2024-11-26 17:21:34.790865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:04.872 [2024-11-26 17:21:34.790944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.132 BaseBdev1_malloc 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.132 true 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.132 [2024-11-26 17:21:35.226035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:05.132 [2024-11-26 17:21:35.226109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.132 [2024-11-26 17:21:35.226136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:05.132 [2024-11-26 17:21:35.226151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.132 [2024-11-26 17:21:35.228781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.132 [2024-11-26 17:21:35.228826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:05.132 BaseBdev1 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.132 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.392 BaseBdev2_malloc 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.392 true 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.392 [2024-11-26 17:21:35.297387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:05.392 [2024-11-26 17:21:35.297470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.392 [2024-11-26 17:21:35.297492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:05.392 [2024-11-26 17:21:35.297508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.392 [2024-11-26 17:21:35.300116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.392 [2024-11-26 17:21:35.300164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:05.392 BaseBdev2 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.392 BaseBdev3_malloc 00:25:05.392 17:21:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.392 true 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.392 [2024-11-26 17:21:35.381384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:05.392 [2024-11-26 17:21:35.381452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.392 [2024-11-26 17:21:35.381474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:05.392 [2024-11-26 17:21:35.381488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.392 [2024-11-26 17:21:35.384034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.392 [2024-11-26 17:21:35.384076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:05.392 BaseBdev3 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.392 [2024-11-26 17:21:35.393462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:05.392 [2024-11-26 17:21:35.395687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:05.392 [2024-11-26 17:21:35.395767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:05.392 [2024-11-26 17:21:35.395969] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:05.392 [2024-11-26 17:21:35.395984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:05.392 [2024-11-26 17:21:35.396251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:25:05.392 [2024-11-26 17:21:35.396411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:05.392 [2024-11-26 17:21:35.396428] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:05.392 [2024-11-26 17:21:35.396603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:05.392 17:21:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:05.392 "name": "raid_bdev1", 00:25:05.392 "uuid": "7bbfdd9b-3a32-4145-9ee7-b301148266dd", 00:25:05.392 "strip_size_kb": 64, 00:25:05.392 "state": "online", 00:25:05.392 "raid_level": "raid0", 00:25:05.392 "superblock": true, 00:25:05.392 "num_base_bdevs": 3, 00:25:05.392 "num_base_bdevs_discovered": 3, 00:25:05.392 "num_base_bdevs_operational": 3, 00:25:05.392 "base_bdevs_list": [ 00:25:05.392 { 00:25:05.392 "name": "BaseBdev1", 00:25:05.392 "uuid": "0dcdd9c5-3f2c-552b-84c0-11523c296ba7", 00:25:05.392 "is_configured": true, 00:25:05.392 "data_offset": 2048, 00:25:05.392 "data_size": 63488 00:25:05.392 }, 00:25:05.392 { 00:25:05.392 "name": "BaseBdev2", 00:25:05.392 "uuid": "6b03dcb8-ebcf-52bd-a911-bc173c3ae1dc", 00:25:05.392 "is_configured": true, 00:25:05.392 "data_offset": 2048, 00:25:05.392 "data_size": 63488 
00:25:05.392 }, 00:25:05.392 { 00:25:05.392 "name": "BaseBdev3", 00:25:05.392 "uuid": "9655025e-84d3-5b95-b7d2-faa1840ecfac", 00:25:05.392 "is_configured": true, 00:25:05.392 "data_offset": 2048, 00:25:05.392 "data_size": 63488 00:25:05.392 } 00:25:05.392 ] 00:25:05.392 }' 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:05.392 17:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.961 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:05.961 17:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:05.961 [2024-11-26 17:21:35.898413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:06.899 "name": "raid_bdev1", 00:25:06.899 "uuid": "7bbfdd9b-3a32-4145-9ee7-b301148266dd", 00:25:06.899 "strip_size_kb": 64, 00:25:06.899 "state": "online", 00:25:06.899 "raid_level": "raid0", 00:25:06.899 "superblock": true, 00:25:06.899 "num_base_bdevs": 3, 00:25:06.899 "num_base_bdevs_discovered": 3, 00:25:06.899 "num_base_bdevs_operational": 3, 00:25:06.899 "base_bdevs_list": [ 00:25:06.899 { 00:25:06.899 "name": "BaseBdev1", 00:25:06.899 "uuid": "0dcdd9c5-3f2c-552b-84c0-11523c296ba7", 00:25:06.899 "is_configured": true, 00:25:06.899 "data_offset": 2048, 00:25:06.899 "data_size": 63488 
00:25:06.899 }, 00:25:06.899 { 00:25:06.899 "name": "BaseBdev2", 00:25:06.899 "uuid": "6b03dcb8-ebcf-52bd-a911-bc173c3ae1dc", 00:25:06.899 "is_configured": true, 00:25:06.899 "data_offset": 2048, 00:25:06.899 "data_size": 63488 00:25:06.899 }, 00:25:06.899 { 00:25:06.899 "name": "BaseBdev3", 00:25:06.899 "uuid": "9655025e-84d3-5b95-b7d2-faa1840ecfac", 00:25:06.899 "is_configured": true, 00:25:06.899 "data_offset": 2048, 00:25:06.899 "data_size": 63488 00:25:06.899 } 00:25:06.899 ] 00:25:06.899 }' 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:06.899 17:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.159 17:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:07.159 17:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.159 17:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.159 [2024-11-26 17:21:37.219415] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:07.159 [2024-11-26 17:21:37.219455] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:07.159 [2024-11-26 17:21:37.222138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:07.159 [2024-11-26 17:21:37.222189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:07.159 [2024-11-26 17:21:37.222233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:07.159 [2024-11-26 17:21:37.222245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:07.159 { 00:25:07.159 "results": [ 00:25:07.159 { 00:25:07.159 "job": "raid_bdev1", 00:25:07.159 "core_mask": "0x1", 00:25:07.159 "workload": "randrw", 00:25:07.159 "percentage": 50, 
00:25:07.159 "status": "finished", 00:25:07.159 "queue_depth": 1, 00:25:07.159 "io_size": 131072, 00:25:07.159 "runtime": 1.320768, 00:25:07.159 "iops": 15226.746862431555, 00:25:07.159 "mibps": 1903.3433578039444, 00:25:07.159 "io_failed": 1, 00:25:07.159 "io_timeout": 0, 00:25:07.159 "avg_latency_us": 91.38866620020255, 00:25:07.159 "min_latency_us": 26.936546184738955, 00:25:07.159 "max_latency_us": 1421.2626506024096 00:25:07.159 } 00:25:07.159 ], 00:25:07.159 "core_count": 1 00:25:07.159 } 00:25:07.159 17:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.159 17:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65420 00:25:07.159 17:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65420 ']' 00:25:07.159 17:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65420 00:25:07.159 17:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:25:07.159 17:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.159 17:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65420 00:25:07.418 17:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:07.418 17:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:07.418 killing process with pid 65420 00:25:07.418 17:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65420' 00:25:07.418 17:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65420 00:25:07.418 17:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65420 00:25:07.418 [2024-11-26 17:21:37.272976] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:07.418 [2024-11-26 
17:21:37.519797] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:08.797 17:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.s9dEHdvwxi 00:25:08.797 17:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:08.797 17:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:08.797 17:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:25:08.797 17:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:25:08.797 17:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:08.797 17:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:08.797 17:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:25:08.797 00:25:08.797 real 0m4.635s 00:25:08.797 user 0m5.423s 00:25:08.797 sys 0m0.675s 00:25:08.797 17:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:08.797 17:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.797 ************************************ 00:25:08.797 END TEST raid_read_error_test 00:25:08.797 ************************************ 00:25:08.797 17:21:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:25:08.797 17:21:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:08.797 17:21:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:08.797 17:21:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:08.797 ************************************ 00:25:08.797 START TEST raid_write_error_test 00:25:08.797 ************************************ 00:25:08.797 17:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:25:08.797 17:21:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:25:08.797 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:25:08.797 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:25:08.797 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:08.797 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:08.797 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:08.798 17:21:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9bLHRENU1d 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65560 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65560 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65560 ']' 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.798 17:21:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.057 [2024-11-26 17:21:38.968050] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:25:09.057 [2024-11-26 17:21:38.968189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65560 ] 00:25:09.057 [2024-11-26 17:21:39.146961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.315 [2024-11-26 17:21:39.291236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.575 [2024-11-26 17:21:39.509883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:09.575 [2024-11-26 17:21:39.509938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.835 BaseBdev1_malloc 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.835 true 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.835 [2024-11-26 17:21:39.882821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:09.835 [2024-11-26 17:21:39.882887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.835 [2024-11-26 17:21:39.882912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:09.835 [2024-11-26 17:21:39.882928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.835 [2024-11-26 17:21:39.885510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:09.835 [2024-11-26 17:21:39.885563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:09.835 BaseBdev1 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:09.835 BaseBdev2_malloc 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.835 true 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.835 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.095 [2024-11-26 17:21:39.952101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:10.095 [2024-11-26 17:21:39.952164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:10.095 [2024-11-26 17:21:39.952185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:10.095 [2024-11-26 17:21:39.952200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:10.095 [2024-11-26 17:21:39.954823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:10.095 [2024-11-26 17:21:39.954867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:10.095 BaseBdev2 00:25:10.095 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.095 17:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:10.095 17:21:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:10.095 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.095 17:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.095 BaseBdev3_malloc 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.095 true 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.095 [2024-11-26 17:21:40.035565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:10.095 [2024-11-26 17:21:40.035630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:10.095 [2024-11-26 17:21:40.035656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:10.095 [2024-11-26 17:21:40.035671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:10.095 [2024-11-26 17:21:40.038374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:10.095 [2024-11-26 17:21:40.038422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:25:10.095 BaseBdev3 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.095 [2024-11-26 17:21:40.047660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:10.095 [2024-11-26 17:21:40.049994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:10.095 [2024-11-26 17:21:40.050097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:10.095 [2024-11-26 17:21:40.050317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:10.095 [2024-11-26 17:21:40.050334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:10.095 [2024-11-26 17:21:40.050669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:25:10.095 [2024-11-26 17:21:40.050845] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:10.095 [2024-11-26 17:21:40.050862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:10.095 [2024-11-26 17:21:40.051026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.095 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:10.095 "name": "raid_bdev1", 00:25:10.095 "uuid": "7d69695d-c435-48b2-a567-1bce2593f803", 00:25:10.095 "strip_size_kb": 64, 00:25:10.095 "state": "online", 00:25:10.095 "raid_level": "raid0", 00:25:10.095 "superblock": true, 00:25:10.095 "num_base_bdevs": 3, 00:25:10.095 "num_base_bdevs_discovered": 3, 00:25:10.095 "num_base_bdevs_operational": 3, 00:25:10.095 "base_bdevs_list": [ 00:25:10.095 { 00:25:10.095 "name": "BaseBdev1", 
00:25:10.095 "uuid": "9df35c9d-3d86-57b8-9f9a-3e3be35975fa", 00:25:10.095 "is_configured": true, 00:25:10.096 "data_offset": 2048, 00:25:10.096 "data_size": 63488 00:25:10.096 }, 00:25:10.096 { 00:25:10.096 "name": "BaseBdev2", 00:25:10.096 "uuid": "433d5845-eb89-5ad3-827d-30c62bcd1ea0", 00:25:10.096 "is_configured": true, 00:25:10.096 "data_offset": 2048, 00:25:10.096 "data_size": 63488 00:25:10.096 }, 00:25:10.096 { 00:25:10.096 "name": "BaseBdev3", 00:25:10.096 "uuid": "671ff111-bb8f-5e7b-b9d1-1241a26b012a", 00:25:10.096 "is_configured": true, 00:25:10.096 "data_offset": 2048, 00:25:10.096 "data_size": 63488 00:25:10.096 } 00:25:10.096 ] 00:25:10.096 }' 00:25:10.096 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:10.096 17:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.355 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:10.355 17:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:10.614 [2024-11-26 17:21:40.552373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.552 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:11.552 "name": "raid_bdev1", 00:25:11.552 "uuid": "7d69695d-c435-48b2-a567-1bce2593f803", 00:25:11.552 "strip_size_kb": 64, 00:25:11.552 "state": "online", 00:25:11.552 
"raid_level": "raid0", 00:25:11.552 "superblock": true, 00:25:11.552 "num_base_bdevs": 3, 00:25:11.552 "num_base_bdevs_discovered": 3, 00:25:11.552 "num_base_bdevs_operational": 3, 00:25:11.553 "base_bdevs_list": [ 00:25:11.553 { 00:25:11.553 "name": "BaseBdev1", 00:25:11.553 "uuid": "9df35c9d-3d86-57b8-9f9a-3e3be35975fa", 00:25:11.553 "is_configured": true, 00:25:11.553 "data_offset": 2048, 00:25:11.553 "data_size": 63488 00:25:11.553 }, 00:25:11.553 { 00:25:11.553 "name": "BaseBdev2", 00:25:11.553 "uuid": "433d5845-eb89-5ad3-827d-30c62bcd1ea0", 00:25:11.553 "is_configured": true, 00:25:11.553 "data_offset": 2048, 00:25:11.553 "data_size": 63488 00:25:11.553 }, 00:25:11.553 { 00:25:11.553 "name": "BaseBdev3", 00:25:11.553 "uuid": "671ff111-bb8f-5e7b-b9d1-1241a26b012a", 00:25:11.553 "is_configured": true, 00:25:11.553 "data_offset": 2048, 00:25:11.553 "data_size": 63488 00:25:11.553 } 00:25:11.553 ] 00:25:11.553 }' 00:25:11.553 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:11.553 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.811 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:11.811 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.811 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.069 [2024-11-26 17:21:41.925288] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:12.069 [2024-11-26 17:21:41.925327] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:12.069 [2024-11-26 17:21:41.928009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:12.069 [2024-11-26 17:21:41.928063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.069 [2024-11-26 17:21:41.928107] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:12.069 [2024-11-26 17:21:41.928119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:12.069 { 00:25:12.069 "results": [ 00:25:12.069 { 00:25:12.069 "job": "raid_bdev1", 00:25:12.069 "core_mask": "0x1", 00:25:12.069 "workload": "randrw", 00:25:12.069 "percentage": 50, 00:25:12.069 "status": "finished", 00:25:12.069 "queue_depth": 1, 00:25:12.069 "io_size": 131072, 00:25:12.069 "runtime": 1.372863, 00:25:12.069 "iops": 14667.887473112758, 00:25:12.069 "mibps": 1833.4859341390948, 00:25:12.069 "io_failed": 1, 00:25:12.069 "io_timeout": 0, 00:25:12.069 "avg_latency_us": 95.22462462821791, 00:25:12.069 "min_latency_us": 26.936546184738955, 00:25:12.069 "max_latency_us": 1487.0618473895581 00:25:12.069 } 00:25:12.069 ], 00:25:12.069 "core_count": 1 00:25:12.069 } 00:25:12.069 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.069 17:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65560 00:25:12.069 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65560 ']' 00:25:12.069 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65560 00:25:12.070 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:25:12.070 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.070 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65560 00:25:12.070 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:12.070 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:12.070 killing process with pid 65560 00:25:12.070 
17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65560' 00:25:12.070 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65560 00:25:12.070 [2024-11-26 17:21:41.981560] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:12.070 17:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65560 00:25:12.329 [2024-11-26 17:21:42.225963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:13.715 17:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9bLHRENU1d 00:25:13.715 17:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:13.715 17:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:13.715 17:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:25:13.715 17:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:25:13.715 17:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:13.715 17:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:13.715 17:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:25:13.715 ************************************ 00:25:13.715 END TEST raid_write_error_test 00:25:13.715 ************************************ 00:25:13.715 00:25:13.715 real 0m4.633s 00:25:13.715 user 0m5.398s 00:25:13.715 sys 0m0.677s 00:25:13.715 17:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.715 17:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.715 17:21:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:25:13.715 17:21:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:25:13.715 17:21:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:13.715 17:21:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.715 17:21:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:13.715 ************************************ 00:25:13.715 START TEST raid_state_function_test 00:25:13.715 ************************************ 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:13.715 17:21:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:25:13.715 Process raid pid: 65704 00:25:13.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65704 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65704' 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65704 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65704 ']' 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.715 17:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.715 [2024-11-26 17:21:43.664774] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:25:13.715 [2024-11-26 17:21:43.664916] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.975 [2024-11-26 17:21:43.848438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.975 [2024-11-26 17:21:43.998099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.234 [2024-11-26 17:21:44.231782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:14.234 [2024-11-26 17:21:44.231828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.494 [2024-11-26 17:21:44.538221] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:14.494 [2024-11-26 17:21:44.538292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:14.494 [2024-11-26 17:21:44.538305] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:14.494 [2024-11-26 17:21:44.538319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:14.494 [2024-11-26 17:21:44.538326] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:25:14.494 [2024-11-26 17:21:44.538338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.494 17:21:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:14.494 "name": "Existed_Raid", 00:25:14.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.494 "strip_size_kb": 64, 00:25:14.494 "state": "configuring", 00:25:14.494 "raid_level": "concat", 00:25:14.494 "superblock": false, 00:25:14.494 "num_base_bdevs": 3, 00:25:14.494 "num_base_bdevs_discovered": 0, 00:25:14.494 "num_base_bdevs_operational": 3, 00:25:14.494 "base_bdevs_list": [ 00:25:14.494 { 00:25:14.494 "name": "BaseBdev1", 00:25:14.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.494 "is_configured": false, 00:25:14.494 "data_offset": 0, 00:25:14.494 "data_size": 0 00:25:14.494 }, 00:25:14.494 { 00:25:14.494 "name": "BaseBdev2", 00:25:14.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.494 "is_configured": false, 00:25:14.494 "data_offset": 0, 00:25:14.494 "data_size": 0 00:25:14.494 }, 00:25:14.494 { 00:25:14.494 "name": "BaseBdev3", 00:25:14.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.494 "is_configured": false, 00:25:14.494 "data_offset": 0, 00:25:14.494 "data_size": 0 00:25:14.494 } 00:25:14.494 ] 00:25:14.494 }' 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:14.494 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.060 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:15.060 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.060 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.061 [2024-11-26 17:21:44.965712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:15.061 [2024-11-26 17:21:44.965757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:25:15.061 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.061 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:15.061 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.061 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.061 [2024-11-26 17:21:44.977703] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:15.061 [2024-11-26 17:21:44.977759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:15.061 [2024-11-26 17:21:44.977771] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:15.061 [2024-11-26 17:21:44.977784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:15.061 [2024-11-26 17:21:44.977792] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:15.061 [2024-11-26 17:21:44.977804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:15.061 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.061 17:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:15.061 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.061 17:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.061 [2024-11-26 17:21:45.028553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:15.061 BaseBdev1 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.061 [ 00:25:15.061 { 00:25:15.061 "name": "BaseBdev1", 00:25:15.061 "aliases": [ 00:25:15.061 "681c4c8d-b987-4f63-b37b-f6c1a535be13" 00:25:15.061 ], 00:25:15.061 "product_name": "Malloc disk", 00:25:15.061 "block_size": 512, 00:25:15.061 "num_blocks": 65536, 00:25:15.061 "uuid": "681c4c8d-b987-4f63-b37b-f6c1a535be13", 00:25:15.061 "assigned_rate_limits": { 00:25:15.061 "rw_ios_per_sec": 0, 00:25:15.061 "rw_mbytes_per_sec": 0, 00:25:15.061 "r_mbytes_per_sec": 0, 00:25:15.061 "w_mbytes_per_sec": 0 00:25:15.061 }, 
00:25:15.061 "claimed": true, 00:25:15.061 "claim_type": "exclusive_write", 00:25:15.061 "zoned": false, 00:25:15.061 "supported_io_types": { 00:25:15.061 "read": true, 00:25:15.061 "write": true, 00:25:15.061 "unmap": true, 00:25:15.061 "flush": true, 00:25:15.061 "reset": true, 00:25:15.061 "nvme_admin": false, 00:25:15.061 "nvme_io": false, 00:25:15.061 "nvme_io_md": false, 00:25:15.061 "write_zeroes": true, 00:25:15.061 "zcopy": true, 00:25:15.061 "get_zone_info": false, 00:25:15.061 "zone_management": false, 00:25:15.061 "zone_append": false, 00:25:15.061 "compare": false, 00:25:15.061 "compare_and_write": false, 00:25:15.061 "abort": true, 00:25:15.061 "seek_hole": false, 00:25:15.061 "seek_data": false, 00:25:15.061 "copy": true, 00:25:15.061 "nvme_iov_md": false 00:25:15.061 }, 00:25:15.061 "memory_domains": [ 00:25:15.061 { 00:25:15.061 "dma_device_id": "system", 00:25:15.061 "dma_device_type": 1 00:25:15.061 }, 00:25:15.061 { 00:25:15.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:15.061 "dma_device_type": 2 00:25:15.061 } 00:25:15.061 ], 00:25:15.061 "driver_specific": {} 00:25:15.061 } 00:25:15.061 ] 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:15.061 17:21:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.061 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.061 "name": "Existed_Raid", 00:25:15.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.061 "strip_size_kb": 64, 00:25:15.061 "state": "configuring", 00:25:15.061 "raid_level": "concat", 00:25:15.061 "superblock": false, 00:25:15.061 "num_base_bdevs": 3, 00:25:15.061 "num_base_bdevs_discovered": 1, 00:25:15.061 "num_base_bdevs_operational": 3, 00:25:15.061 "base_bdevs_list": [ 00:25:15.061 { 00:25:15.061 "name": "BaseBdev1", 00:25:15.061 "uuid": "681c4c8d-b987-4f63-b37b-f6c1a535be13", 00:25:15.061 "is_configured": true, 00:25:15.061 "data_offset": 0, 00:25:15.061 "data_size": 65536 00:25:15.061 }, 00:25:15.061 { 00:25:15.061 "name": "BaseBdev2", 00:25:15.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.061 "is_configured": false, 00:25:15.062 
"data_offset": 0, 00:25:15.062 "data_size": 0 00:25:15.062 }, 00:25:15.062 { 00:25:15.062 "name": "BaseBdev3", 00:25:15.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.062 "is_configured": false, 00:25:15.062 "data_offset": 0, 00:25:15.062 "data_size": 0 00:25:15.062 } 00:25:15.062 ] 00:25:15.062 }' 00:25:15.062 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.062 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.629 [2024-11-26 17:21:45.491959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:15.629 [2024-11-26 17:21:45.492164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.629 [2024-11-26 17:21:45.503984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:15.629 [2024-11-26 17:21:45.506348] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:15.629 [2024-11-26 17:21:45.506403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:25:15.629 [2024-11-26 17:21:45.506417] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:15.629 [2024-11-26 17:21:45.506431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.629 "name": "Existed_Raid", 00:25:15.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.629 "strip_size_kb": 64, 00:25:15.629 "state": "configuring", 00:25:15.629 "raid_level": "concat", 00:25:15.629 "superblock": false, 00:25:15.629 "num_base_bdevs": 3, 00:25:15.629 "num_base_bdevs_discovered": 1, 00:25:15.629 "num_base_bdevs_operational": 3, 00:25:15.629 "base_bdevs_list": [ 00:25:15.629 { 00:25:15.629 "name": "BaseBdev1", 00:25:15.629 "uuid": "681c4c8d-b987-4f63-b37b-f6c1a535be13", 00:25:15.629 "is_configured": true, 00:25:15.629 "data_offset": 0, 00:25:15.629 "data_size": 65536 00:25:15.629 }, 00:25:15.629 { 00:25:15.629 "name": "BaseBdev2", 00:25:15.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.629 "is_configured": false, 00:25:15.629 "data_offset": 0, 00:25:15.629 "data_size": 0 00:25:15.629 }, 00:25:15.629 { 00:25:15.629 "name": "BaseBdev3", 00:25:15.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.629 "is_configured": false, 00:25:15.629 "data_offset": 0, 00:25:15.629 "data_size": 0 00:25:15.629 } 00:25:15.629 ] 00:25:15.629 }' 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.629 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.889 [2024-11-26 17:21:45.972084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:15.889 BaseBdev2 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.889 17:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.889 [ 00:25:15.889 { 00:25:15.889 "name": "BaseBdev2", 00:25:15.889 "aliases": [ 00:25:15.889 "b83e2749-b64e-402d-9e6e-33280da90a2a" 00:25:15.889 ], 
00:25:15.889 "product_name": "Malloc disk", 00:25:15.889 "block_size": 512, 00:25:15.889 "num_blocks": 65536, 00:25:15.889 "uuid": "b83e2749-b64e-402d-9e6e-33280da90a2a", 00:25:15.889 "assigned_rate_limits": { 00:25:15.889 "rw_ios_per_sec": 0, 00:25:15.889 "rw_mbytes_per_sec": 0, 00:25:15.889 "r_mbytes_per_sec": 0, 00:25:15.889 "w_mbytes_per_sec": 0 00:25:15.889 }, 00:25:15.889 "claimed": true, 00:25:15.889 "claim_type": "exclusive_write", 00:25:15.889 "zoned": false, 00:25:15.889 "supported_io_types": { 00:25:15.889 "read": true, 00:25:15.889 "write": true, 00:25:16.148 "unmap": true, 00:25:16.148 "flush": true, 00:25:16.148 "reset": true, 00:25:16.148 "nvme_admin": false, 00:25:16.148 "nvme_io": false, 00:25:16.148 "nvme_io_md": false, 00:25:16.148 "write_zeroes": true, 00:25:16.148 "zcopy": true, 00:25:16.148 "get_zone_info": false, 00:25:16.148 "zone_management": false, 00:25:16.148 "zone_append": false, 00:25:16.148 "compare": false, 00:25:16.148 "compare_and_write": false, 00:25:16.148 "abort": true, 00:25:16.148 "seek_hole": false, 00:25:16.148 "seek_data": false, 00:25:16.148 "copy": true, 00:25:16.148 "nvme_iov_md": false 00:25:16.148 }, 00:25:16.148 "memory_domains": [ 00:25:16.148 { 00:25:16.148 "dma_device_id": "system", 00:25:16.148 "dma_device_type": 1 00:25:16.148 }, 00:25:16.148 { 00:25:16.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.148 "dma_device_type": 2 00:25:16.148 } 00:25:16.148 ], 00:25:16.148 "driver_specific": {} 00:25:16.148 } 00:25:16.148 ] 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.148 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:16.148 "name": "Existed_Raid", 00:25:16.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.149 "strip_size_kb": 64, 00:25:16.149 "state": "configuring", 00:25:16.149 "raid_level": "concat", 00:25:16.149 
"superblock": false, 00:25:16.149 "num_base_bdevs": 3, 00:25:16.149 "num_base_bdevs_discovered": 2, 00:25:16.149 "num_base_bdevs_operational": 3, 00:25:16.149 "base_bdevs_list": [ 00:25:16.149 { 00:25:16.149 "name": "BaseBdev1", 00:25:16.149 "uuid": "681c4c8d-b987-4f63-b37b-f6c1a535be13", 00:25:16.149 "is_configured": true, 00:25:16.149 "data_offset": 0, 00:25:16.149 "data_size": 65536 00:25:16.149 }, 00:25:16.149 { 00:25:16.149 "name": "BaseBdev2", 00:25:16.149 "uuid": "b83e2749-b64e-402d-9e6e-33280da90a2a", 00:25:16.149 "is_configured": true, 00:25:16.149 "data_offset": 0, 00:25:16.149 "data_size": 65536 00:25:16.149 }, 00:25:16.149 { 00:25:16.149 "name": "BaseBdev3", 00:25:16.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.149 "is_configured": false, 00:25:16.149 "data_offset": 0, 00:25:16.149 "data_size": 0 00:25:16.149 } 00:25:16.149 ] 00:25:16.149 }' 00:25:16.149 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:16.149 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.408 [2024-11-26 17:21:46.469829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:16.408 [2024-11-26 17:21:46.469890] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:16.408 [2024-11-26 17:21:46.469907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:16.408 [2024-11-26 17:21:46.470221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:16.408 [2024-11-26 17:21:46.470421] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:16.408 [2024-11-26 17:21:46.470433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:16.408 [2024-11-26 17:21:46.470785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.408 BaseBdev3 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.408 [ 00:25:16.408 { 00:25:16.408 
"name": "BaseBdev3", 00:25:16.408 "aliases": [ 00:25:16.408 "49122373-63eb-439a-976c-b6ce5db218f8" 00:25:16.408 ], 00:25:16.408 "product_name": "Malloc disk", 00:25:16.408 "block_size": 512, 00:25:16.408 "num_blocks": 65536, 00:25:16.408 "uuid": "49122373-63eb-439a-976c-b6ce5db218f8", 00:25:16.408 "assigned_rate_limits": { 00:25:16.408 "rw_ios_per_sec": 0, 00:25:16.408 "rw_mbytes_per_sec": 0, 00:25:16.408 "r_mbytes_per_sec": 0, 00:25:16.408 "w_mbytes_per_sec": 0 00:25:16.408 }, 00:25:16.408 "claimed": true, 00:25:16.408 "claim_type": "exclusive_write", 00:25:16.408 "zoned": false, 00:25:16.408 "supported_io_types": { 00:25:16.408 "read": true, 00:25:16.408 "write": true, 00:25:16.408 "unmap": true, 00:25:16.408 "flush": true, 00:25:16.408 "reset": true, 00:25:16.408 "nvme_admin": false, 00:25:16.408 "nvme_io": false, 00:25:16.408 "nvme_io_md": false, 00:25:16.408 "write_zeroes": true, 00:25:16.408 "zcopy": true, 00:25:16.408 "get_zone_info": false, 00:25:16.408 "zone_management": false, 00:25:16.408 "zone_append": false, 00:25:16.408 "compare": false, 00:25:16.408 "compare_and_write": false, 00:25:16.408 "abort": true, 00:25:16.408 "seek_hole": false, 00:25:16.408 "seek_data": false, 00:25:16.408 "copy": true, 00:25:16.408 "nvme_iov_md": false 00:25:16.408 }, 00:25:16.408 "memory_domains": [ 00:25:16.408 { 00:25:16.408 "dma_device_id": "system", 00:25:16.408 "dma_device_type": 1 00:25:16.408 }, 00:25:16.408 { 00:25:16.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.408 "dma_device_type": 2 00:25:16.408 } 00:25:16.408 ], 00:25:16.408 "driver_specific": {} 00:25:16.408 } 00:25:16.408 ] 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:16.408 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:16.668 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.668 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:16.668 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.668 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.668 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.668 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:16.668 "name": "Existed_Raid", 00:25:16.668 "uuid": "627b8ddc-a70d-406c-b037-cddd423b882a", 00:25:16.668 
"strip_size_kb": 64, 00:25:16.668 "state": "online", 00:25:16.668 "raid_level": "concat", 00:25:16.668 "superblock": false, 00:25:16.668 "num_base_bdevs": 3, 00:25:16.668 "num_base_bdevs_discovered": 3, 00:25:16.668 "num_base_bdevs_operational": 3, 00:25:16.668 "base_bdevs_list": [ 00:25:16.668 { 00:25:16.668 "name": "BaseBdev1", 00:25:16.668 "uuid": "681c4c8d-b987-4f63-b37b-f6c1a535be13", 00:25:16.668 "is_configured": true, 00:25:16.668 "data_offset": 0, 00:25:16.668 "data_size": 65536 00:25:16.668 }, 00:25:16.668 { 00:25:16.668 "name": "BaseBdev2", 00:25:16.668 "uuid": "b83e2749-b64e-402d-9e6e-33280da90a2a", 00:25:16.668 "is_configured": true, 00:25:16.668 "data_offset": 0, 00:25:16.668 "data_size": 65536 00:25:16.668 }, 00:25:16.668 { 00:25:16.668 "name": "BaseBdev3", 00:25:16.668 "uuid": "49122373-63eb-439a-976c-b6ce5db218f8", 00:25:16.668 "is_configured": true, 00:25:16.668 "data_offset": 0, 00:25:16.668 "data_size": 65536 00:25:16.668 } 00:25:16.668 ] 00:25:16.668 }' 00:25:16.668 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:16.668 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.927 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:16.927 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:16.927 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:16.927 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:16.927 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:16.927 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:16.927 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:25:16.927 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:16.927 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.927 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.927 [2024-11-26 17:21:46.961951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:16.927 17:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.927 17:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:16.927 "name": "Existed_Raid", 00:25:16.927 "aliases": [ 00:25:16.927 "627b8ddc-a70d-406c-b037-cddd423b882a" 00:25:16.927 ], 00:25:16.927 "product_name": "Raid Volume", 00:25:16.927 "block_size": 512, 00:25:16.927 "num_blocks": 196608, 00:25:16.927 "uuid": "627b8ddc-a70d-406c-b037-cddd423b882a", 00:25:16.927 "assigned_rate_limits": { 00:25:16.927 "rw_ios_per_sec": 0, 00:25:16.927 "rw_mbytes_per_sec": 0, 00:25:16.927 "r_mbytes_per_sec": 0, 00:25:16.927 "w_mbytes_per_sec": 0 00:25:16.927 }, 00:25:16.927 "claimed": false, 00:25:16.927 "zoned": false, 00:25:16.927 "supported_io_types": { 00:25:16.927 "read": true, 00:25:16.927 "write": true, 00:25:16.927 "unmap": true, 00:25:16.927 "flush": true, 00:25:16.927 "reset": true, 00:25:16.927 "nvme_admin": false, 00:25:16.927 "nvme_io": false, 00:25:16.927 "nvme_io_md": false, 00:25:16.927 "write_zeroes": true, 00:25:16.927 "zcopy": false, 00:25:16.927 "get_zone_info": false, 00:25:16.927 "zone_management": false, 00:25:16.927 "zone_append": false, 00:25:16.927 "compare": false, 00:25:16.927 "compare_and_write": false, 00:25:16.927 "abort": false, 00:25:16.927 "seek_hole": false, 00:25:16.927 "seek_data": false, 00:25:16.927 "copy": false, 00:25:16.927 "nvme_iov_md": false 00:25:16.927 }, 00:25:16.927 "memory_domains": [ 00:25:16.927 { 00:25:16.927 "dma_device_id": "system", 
00:25:16.927 "dma_device_type": 1 00:25:16.927 }, 00:25:16.927 { 00:25:16.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.927 "dma_device_type": 2 00:25:16.927 }, 00:25:16.927 { 00:25:16.927 "dma_device_id": "system", 00:25:16.927 "dma_device_type": 1 00:25:16.927 }, 00:25:16.927 { 00:25:16.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.927 "dma_device_type": 2 00:25:16.927 }, 00:25:16.927 { 00:25:16.927 "dma_device_id": "system", 00:25:16.927 "dma_device_type": 1 00:25:16.927 }, 00:25:16.927 { 00:25:16.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.927 "dma_device_type": 2 00:25:16.927 } 00:25:16.927 ], 00:25:16.927 "driver_specific": { 00:25:16.927 "raid": { 00:25:16.927 "uuid": "627b8ddc-a70d-406c-b037-cddd423b882a", 00:25:16.927 "strip_size_kb": 64, 00:25:16.927 "state": "online", 00:25:16.927 "raid_level": "concat", 00:25:16.927 "superblock": false, 00:25:16.927 "num_base_bdevs": 3, 00:25:16.927 "num_base_bdevs_discovered": 3, 00:25:16.927 "num_base_bdevs_operational": 3, 00:25:16.927 "base_bdevs_list": [ 00:25:16.927 { 00:25:16.927 "name": "BaseBdev1", 00:25:16.927 "uuid": "681c4c8d-b987-4f63-b37b-f6c1a535be13", 00:25:16.927 "is_configured": true, 00:25:16.927 "data_offset": 0, 00:25:16.927 "data_size": 65536 00:25:16.927 }, 00:25:16.927 { 00:25:16.927 "name": "BaseBdev2", 00:25:16.927 "uuid": "b83e2749-b64e-402d-9e6e-33280da90a2a", 00:25:16.927 "is_configured": true, 00:25:16.927 "data_offset": 0, 00:25:16.927 "data_size": 65536 00:25:16.927 }, 00:25:16.927 { 00:25:16.927 "name": "BaseBdev3", 00:25:16.927 "uuid": "49122373-63eb-439a-976c-b6ce5db218f8", 00:25:16.927 "is_configured": true, 00:25:16.927 "data_offset": 0, 00:25:16.927 "data_size": 65536 00:25:16.927 } 00:25:16.927 ] 00:25:16.927 } 00:25:16.927 } 00:25:16.927 }' 00:25:16.927 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:16.927 17:21:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:16.927 BaseBdev2 00:25:16.927 BaseBdev3' 00:25:16.927 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:17.186 17:21:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:17.186 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.187 [2024-11-26 17:21:47.181584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:17.187 [2024-11-26 17:21:47.181822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:17.187 [2024-11-26 17:21:47.181922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.187 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:17.187 17:21:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.445 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.445 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.445 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:17.445 "name": "Existed_Raid", 00:25:17.445 "uuid": "627b8ddc-a70d-406c-b037-cddd423b882a", 00:25:17.445 "strip_size_kb": 64, 00:25:17.445 "state": "offline", 00:25:17.445 "raid_level": "concat", 00:25:17.445 "superblock": false, 00:25:17.445 "num_base_bdevs": 3, 00:25:17.445 "num_base_bdevs_discovered": 2, 00:25:17.445 "num_base_bdevs_operational": 2, 00:25:17.445 "base_bdevs_list": [ 00:25:17.445 { 00:25:17.445 "name": null, 00:25:17.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.445 "is_configured": false, 00:25:17.445 "data_offset": 0, 00:25:17.445 "data_size": 65536 00:25:17.445 }, 00:25:17.445 { 00:25:17.445 "name": "BaseBdev2", 00:25:17.445 "uuid": "b83e2749-b64e-402d-9e6e-33280da90a2a", 00:25:17.445 "is_configured": true, 00:25:17.445 "data_offset": 0, 00:25:17.445 "data_size": 65536 00:25:17.445 }, 00:25:17.445 { 00:25:17.445 "name": "BaseBdev3", 00:25:17.445 "uuid": "49122373-63eb-439a-976c-b6ce5db218f8", 00:25:17.445 "is_configured": true, 00:25:17.445 "data_offset": 0, 00:25:17.445 "data_size": 65536 00:25:17.445 } 00:25:17.445 ] 00:25:17.445 }' 00:25:17.445 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:17.445 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.705 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:17.705 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:17.705 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:25:17.705 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:17.705 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.705 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.705 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.705 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:17.705 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:17.705 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:17.705 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.705 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.705 [2024-11-26 17:21:47.741838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:17.964 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.964 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:17.964 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:17.964 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:17.964 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.964 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.964 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.964 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:17.964 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:17.964 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:17.964 17:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:17.964 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.964 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.964 [2024-11-26 17:21:47.895051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:17.964 [2024-11-26 17:21:47.895131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:17.964 17:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.964 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:17.964 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:17.964 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.964 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:17.964 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.964 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.964 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.964 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:17.964 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:17.964 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 
-gt 2 ']' 00:25:17.964 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:17.964 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:17.964 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:17.964 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.964 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.222 BaseBdev2 00:25:18.222 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.222 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:18.222 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:18.223 17:21:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.223 [ 00:25:18.223 { 00:25:18.223 "name": "BaseBdev2", 00:25:18.223 "aliases": [ 00:25:18.223 "91a9ec7d-f223-4f25-a9be-73942b996883" 00:25:18.223 ], 00:25:18.223 "product_name": "Malloc disk", 00:25:18.223 "block_size": 512, 00:25:18.223 "num_blocks": 65536, 00:25:18.223 "uuid": "91a9ec7d-f223-4f25-a9be-73942b996883", 00:25:18.223 "assigned_rate_limits": { 00:25:18.223 "rw_ios_per_sec": 0, 00:25:18.223 "rw_mbytes_per_sec": 0, 00:25:18.223 "r_mbytes_per_sec": 0, 00:25:18.223 "w_mbytes_per_sec": 0 00:25:18.223 }, 00:25:18.223 "claimed": false, 00:25:18.223 "zoned": false, 00:25:18.223 "supported_io_types": { 00:25:18.223 "read": true, 00:25:18.223 "write": true, 00:25:18.223 "unmap": true, 00:25:18.223 "flush": true, 00:25:18.223 "reset": true, 00:25:18.223 "nvme_admin": false, 00:25:18.223 "nvme_io": false, 00:25:18.223 "nvme_io_md": false, 00:25:18.223 "write_zeroes": true, 00:25:18.223 "zcopy": true, 00:25:18.223 "get_zone_info": false, 00:25:18.223 "zone_management": false, 00:25:18.223 "zone_append": false, 00:25:18.223 "compare": false, 00:25:18.223 "compare_and_write": false, 00:25:18.223 "abort": true, 00:25:18.223 "seek_hole": false, 00:25:18.223 "seek_data": false, 00:25:18.223 "copy": true, 00:25:18.223 "nvme_iov_md": false 00:25:18.223 }, 00:25:18.223 "memory_domains": [ 00:25:18.223 { 00:25:18.223 "dma_device_id": "system", 00:25:18.223 "dma_device_type": 1 00:25:18.223 }, 00:25:18.223 { 00:25:18.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.223 "dma_device_type": 2 00:25:18.223 } 00:25:18.223 ], 00:25:18.223 "driver_specific": {} 00:25:18.223 } 00:25:18.223 ] 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.223 BaseBdev3 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:18.223 
17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.223 [ 00:25:18.223 { 00:25:18.223 "name": "BaseBdev3", 00:25:18.223 "aliases": [ 00:25:18.223 "e850990d-6c9d-4ab9-b4d7-d29ec6c2e692" 00:25:18.223 ], 00:25:18.223 "product_name": "Malloc disk", 00:25:18.223 "block_size": 512, 00:25:18.223 "num_blocks": 65536, 00:25:18.223 "uuid": "e850990d-6c9d-4ab9-b4d7-d29ec6c2e692", 00:25:18.223 "assigned_rate_limits": { 00:25:18.223 "rw_ios_per_sec": 0, 00:25:18.223 "rw_mbytes_per_sec": 0, 00:25:18.223 "r_mbytes_per_sec": 0, 00:25:18.223 "w_mbytes_per_sec": 0 00:25:18.223 }, 00:25:18.223 "claimed": false, 00:25:18.223 "zoned": false, 00:25:18.223 "supported_io_types": { 00:25:18.223 "read": true, 00:25:18.223 "write": true, 00:25:18.223 "unmap": true, 00:25:18.223 "flush": true, 00:25:18.223 "reset": true, 00:25:18.223 "nvme_admin": false, 00:25:18.223 "nvme_io": false, 00:25:18.223 "nvme_io_md": false, 00:25:18.223 "write_zeroes": true, 00:25:18.223 "zcopy": true, 00:25:18.223 "get_zone_info": false, 00:25:18.223 "zone_management": false, 00:25:18.223 "zone_append": false, 00:25:18.223 "compare": false, 00:25:18.223 "compare_and_write": false, 00:25:18.223 "abort": true, 00:25:18.223 "seek_hole": false, 00:25:18.223 "seek_data": false, 00:25:18.223 "copy": true, 00:25:18.223 "nvme_iov_md": false 00:25:18.223 }, 00:25:18.223 "memory_domains": [ 00:25:18.223 { 00:25:18.223 "dma_device_id": "system", 00:25:18.223 "dma_device_type": 1 00:25:18.223 }, 00:25:18.223 { 00:25:18.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.223 "dma_device_type": 2 00:25:18.223 } 00:25:18.223 ], 00:25:18.223 "driver_specific": {} 00:25:18.223 } 00:25:18.223 ] 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.223 [2024-11-26 17:21:48.209304] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:18.223 [2024-11-26 17:21:48.209589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:18.223 [2024-11-26 17:21:48.209713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:18.223 [2024-11-26 17:21:48.212429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.223 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:18.223 "name": "Existed_Raid", 00:25:18.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.223 "strip_size_kb": 64, 00:25:18.223 "state": "configuring", 00:25:18.223 "raid_level": "concat", 00:25:18.223 "superblock": false, 00:25:18.223 "num_base_bdevs": 3, 00:25:18.223 "num_base_bdevs_discovered": 2, 00:25:18.223 "num_base_bdevs_operational": 3, 00:25:18.223 "base_bdevs_list": [ 00:25:18.223 { 00:25:18.223 "name": "BaseBdev1", 00:25:18.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.223 "is_configured": false, 00:25:18.223 "data_offset": 0, 00:25:18.224 "data_size": 0 00:25:18.224 }, 00:25:18.224 { 00:25:18.224 "name": "BaseBdev2", 00:25:18.224 "uuid": "91a9ec7d-f223-4f25-a9be-73942b996883", 00:25:18.224 "is_configured": true, 00:25:18.224 "data_offset": 0, 00:25:18.224 "data_size": 65536 00:25:18.224 }, 00:25:18.224 { 00:25:18.224 "name": 
"BaseBdev3", 00:25:18.224 "uuid": "e850990d-6c9d-4ab9-b4d7-d29ec6c2e692", 00:25:18.224 "is_configured": true, 00:25:18.224 "data_offset": 0, 00:25:18.224 "data_size": 65536 00:25:18.224 } 00:25:18.224 ] 00:25:18.224 }' 00:25:18.224 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:18.224 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.792 [2024-11-26 17:21:48.656745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.792 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.793 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.793 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.793 17:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:18.793 "name": "Existed_Raid", 00:25:18.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.793 "strip_size_kb": 64, 00:25:18.793 "state": "configuring", 00:25:18.793 "raid_level": "concat", 00:25:18.793 "superblock": false, 00:25:18.793 "num_base_bdevs": 3, 00:25:18.793 "num_base_bdevs_discovered": 1, 00:25:18.793 "num_base_bdevs_operational": 3, 00:25:18.793 "base_bdevs_list": [ 00:25:18.793 { 00:25:18.793 "name": "BaseBdev1", 00:25:18.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.793 "is_configured": false, 00:25:18.793 "data_offset": 0, 00:25:18.793 "data_size": 0 00:25:18.793 }, 00:25:18.793 { 00:25:18.793 "name": null, 00:25:18.793 "uuid": "91a9ec7d-f223-4f25-a9be-73942b996883", 00:25:18.793 "is_configured": false, 00:25:18.793 "data_offset": 0, 00:25:18.793 "data_size": 65536 00:25:18.793 }, 00:25:18.793 { 00:25:18.793 "name": "BaseBdev3", 00:25:18.793 "uuid": "e850990d-6c9d-4ab9-b4d7-d29ec6c2e692", 00:25:18.793 "is_configured": true, 00:25:18.793 "data_offset": 0, 00:25:18.793 "data_size": 65536 00:25:18.793 } 00:25:18.793 ] 00:25:18.793 }' 00:25:18.793 17:21:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:18.793 17:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.052 [2024-11-26 17:21:49.119790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:19.052 BaseBdev1 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:19.052 
17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.052 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.052 [ 00:25:19.052 { 00:25:19.052 "name": "BaseBdev1", 00:25:19.052 "aliases": [ 00:25:19.052 "c234eaac-66e6-4125-a899-88e636d84698" 00:25:19.052 ], 00:25:19.052 "product_name": "Malloc disk", 00:25:19.052 "block_size": 512, 00:25:19.053 "num_blocks": 65536, 00:25:19.053 "uuid": "c234eaac-66e6-4125-a899-88e636d84698", 00:25:19.053 "assigned_rate_limits": { 00:25:19.053 "rw_ios_per_sec": 0, 00:25:19.053 "rw_mbytes_per_sec": 0, 00:25:19.053 "r_mbytes_per_sec": 0, 00:25:19.053 "w_mbytes_per_sec": 0 00:25:19.053 }, 00:25:19.053 "claimed": true, 00:25:19.053 "claim_type": "exclusive_write", 00:25:19.053 "zoned": false, 00:25:19.053 "supported_io_types": { 00:25:19.053 "read": true, 00:25:19.053 "write": true, 00:25:19.053 "unmap": true, 00:25:19.053 "flush": true, 00:25:19.053 "reset": true, 00:25:19.053 "nvme_admin": false, 00:25:19.053 "nvme_io": false, 00:25:19.053 "nvme_io_md": false, 00:25:19.053 "write_zeroes": true, 00:25:19.053 "zcopy": true, 00:25:19.053 "get_zone_info": false, 00:25:19.053 "zone_management": false, 00:25:19.053 "zone_append": false, 00:25:19.053 "compare": 
false, 00:25:19.053 "compare_and_write": false, 00:25:19.053 "abort": true, 00:25:19.053 "seek_hole": false, 00:25:19.053 "seek_data": false, 00:25:19.053 "copy": true, 00:25:19.053 "nvme_iov_md": false 00:25:19.053 }, 00:25:19.053 "memory_domains": [ 00:25:19.053 { 00:25:19.053 "dma_device_id": "system", 00:25:19.053 "dma_device_type": 1 00:25:19.053 }, 00:25:19.053 { 00:25:19.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.053 "dma_device_type": 2 00:25:19.053 } 00:25:19.053 ], 00:25:19.053 "driver_specific": {} 00:25:19.053 } 00:25:19.053 ] 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.053 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.311 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.311 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:19.311 "name": "Existed_Raid", 00:25:19.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.311 "strip_size_kb": 64, 00:25:19.311 "state": "configuring", 00:25:19.311 "raid_level": "concat", 00:25:19.311 "superblock": false, 00:25:19.311 "num_base_bdevs": 3, 00:25:19.311 "num_base_bdevs_discovered": 2, 00:25:19.311 "num_base_bdevs_operational": 3, 00:25:19.311 "base_bdevs_list": [ 00:25:19.311 { 00:25:19.311 "name": "BaseBdev1", 00:25:19.311 "uuid": "c234eaac-66e6-4125-a899-88e636d84698", 00:25:19.311 "is_configured": true, 00:25:19.311 "data_offset": 0, 00:25:19.311 "data_size": 65536 00:25:19.311 }, 00:25:19.311 { 00:25:19.311 "name": null, 00:25:19.311 "uuid": "91a9ec7d-f223-4f25-a9be-73942b996883", 00:25:19.311 "is_configured": false, 00:25:19.311 "data_offset": 0, 00:25:19.311 "data_size": 65536 00:25:19.311 }, 00:25:19.311 { 00:25:19.311 "name": "BaseBdev3", 00:25:19.311 "uuid": "e850990d-6c9d-4ab9-b4d7-d29ec6c2e692", 00:25:19.311 "is_configured": true, 00:25:19.311 "data_offset": 0, 00:25:19.311 "data_size": 65536 00:25:19.311 } 00:25:19.311 ] 00:25:19.311 }' 00:25:19.311 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:19.311 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 
-- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.568 [2024-11-26 17:21:49.659155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:19.568 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.827 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.827 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:19.827 "name": "Existed_Raid", 00:25:19.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.827 "strip_size_kb": 64, 00:25:19.827 "state": "configuring", 00:25:19.827 "raid_level": "concat", 00:25:19.827 "superblock": false, 00:25:19.827 "num_base_bdevs": 3, 00:25:19.827 "num_base_bdevs_discovered": 1, 00:25:19.827 "num_base_bdevs_operational": 3, 00:25:19.827 "base_bdevs_list": [ 00:25:19.827 { 00:25:19.827 "name": "BaseBdev1", 00:25:19.827 "uuid": "c234eaac-66e6-4125-a899-88e636d84698", 00:25:19.827 "is_configured": true, 00:25:19.827 "data_offset": 0, 00:25:19.827 "data_size": 65536 00:25:19.827 }, 00:25:19.827 { 00:25:19.827 "name": null, 00:25:19.827 "uuid": "91a9ec7d-f223-4f25-a9be-73942b996883", 00:25:19.827 "is_configured": false, 00:25:19.827 "data_offset": 0, 00:25:19.827 "data_size": 65536 00:25:19.827 }, 00:25:19.827 { 00:25:19.827 "name": null, 00:25:19.827 "uuid": "e850990d-6c9d-4ab9-b4d7-d29ec6c2e692", 00:25:19.827 "is_configured": false, 00:25:19.827 
"data_offset": 0, 00:25:19.827 "data_size": 65536 00:25:19.827 } 00:25:19.827 ] 00:25:19.827 }' 00:25:19.827 17:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:19.827 17:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.085 [2024-11-26 17:21:50.098613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:20.085 17:21:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.085 "name": "Existed_Raid", 00:25:20.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.085 "strip_size_kb": 64, 00:25:20.085 "state": "configuring", 00:25:20.085 "raid_level": "concat", 00:25:20.085 "superblock": false, 00:25:20.085 "num_base_bdevs": 3, 00:25:20.085 "num_base_bdevs_discovered": 2, 00:25:20.085 "num_base_bdevs_operational": 3, 00:25:20.085 "base_bdevs_list": [ 00:25:20.085 { 00:25:20.085 "name": "BaseBdev1", 00:25:20.085 "uuid": "c234eaac-66e6-4125-a899-88e636d84698", 00:25:20.085 "is_configured": true, 00:25:20.085 "data_offset": 
0, 00:25:20.085 "data_size": 65536 00:25:20.085 }, 00:25:20.085 { 00:25:20.085 "name": null, 00:25:20.085 "uuid": "91a9ec7d-f223-4f25-a9be-73942b996883", 00:25:20.085 "is_configured": false, 00:25:20.085 "data_offset": 0, 00:25:20.085 "data_size": 65536 00:25:20.085 }, 00:25:20.085 { 00:25:20.085 "name": "BaseBdev3", 00:25:20.085 "uuid": "e850990d-6c9d-4ab9-b4d7-d29ec6c2e692", 00:25:20.085 "is_configured": true, 00:25:20.085 "data_offset": 0, 00:25:20.085 "data_size": 65536 00:25:20.085 } 00:25:20.085 ] 00:25:20.085 }' 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.085 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.657 [2024-11-26 17:21:50.569905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.657 17:21:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.657 "name": "Existed_Raid", 00:25:20.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.657 "strip_size_kb": 64, 00:25:20.657 "state": "configuring", 00:25:20.657 
"raid_level": "concat", 00:25:20.657 "superblock": false, 00:25:20.657 "num_base_bdevs": 3, 00:25:20.657 "num_base_bdevs_discovered": 1, 00:25:20.657 "num_base_bdevs_operational": 3, 00:25:20.657 "base_bdevs_list": [ 00:25:20.657 { 00:25:20.657 "name": null, 00:25:20.657 "uuid": "c234eaac-66e6-4125-a899-88e636d84698", 00:25:20.657 "is_configured": false, 00:25:20.657 "data_offset": 0, 00:25:20.657 "data_size": 65536 00:25:20.657 }, 00:25:20.657 { 00:25:20.657 "name": null, 00:25:20.657 "uuid": "91a9ec7d-f223-4f25-a9be-73942b996883", 00:25:20.657 "is_configured": false, 00:25:20.657 "data_offset": 0, 00:25:20.657 "data_size": 65536 00:25:20.657 }, 00:25:20.657 { 00:25:20.657 "name": "BaseBdev3", 00:25:20.657 "uuid": "e850990d-6c9d-4ab9-b4d7-d29ec6c2e692", 00:25:20.657 "is_configured": true, 00:25:20.657 "data_offset": 0, 00:25:20.657 "data_size": 65536 00:25:20.657 } 00:25:20.657 ] 00:25:20.657 }' 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.657 17:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.220 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.220 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:21.220 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.220 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.220 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.220 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:21.220 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.221 [2024-11-26 17:21:51.117652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.221 "name": "Existed_Raid", 00:25:21.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.221 "strip_size_kb": 64, 00:25:21.221 "state": "configuring", 00:25:21.221 "raid_level": "concat", 00:25:21.221 "superblock": false, 00:25:21.221 "num_base_bdevs": 3, 00:25:21.221 "num_base_bdevs_discovered": 2, 00:25:21.221 "num_base_bdevs_operational": 3, 00:25:21.221 "base_bdevs_list": [ 00:25:21.221 { 00:25:21.221 "name": null, 00:25:21.221 "uuid": "c234eaac-66e6-4125-a899-88e636d84698", 00:25:21.221 "is_configured": false, 00:25:21.221 "data_offset": 0, 00:25:21.221 "data_size": 65536 00:25:21.221 }, 00:25:21.221 { 00:25:21.221 "name": "BaseBdev2", 00:25:21.221 "uuid": "91a9ec7d-f223-4f25-a9be-73942b996883", 00:25:21.221 "is_configured": true, 00:25:21.221 "data_offset": 0, 00:25:21.221 "data_size": 65536 00:25:21.221 }, 00:25:21.221 { 00:25:21.221 "name": "BaseBdev3", 00:25:21.221 "uuid": "e850990d-6c9d-4ab9-b4d7-d29ec6c2e692", 00:25:21.221 "is_configured": true, 00:25:21.221 "data_offset": 0, 00:25:21.221 "data_size": 65536 00:25:21.221 } 00:25:21.221 ] 00:25:21.221 }' 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.221 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.476 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.476 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:21.476 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.476 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.476 
17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.476 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:21.476 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.476 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.476 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.476 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c234eaac-66e6-4125-a899-88e636d84698 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.733 [2024-11-26 17:21:51.670755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:21.733 [2024-11-26 17:21:51.670812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:21.733 [2024-11-26 17:21:51.670824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:21.733 [2024-11-26 17:21:51.671112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:21.733 [2024-11-26 17:21:51.671271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:21.733 [2024-11-26 17:21:51.671282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:21.733 [2024-11-26 17:21:51.671566] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:21.733 NewBaseBdev 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.733 [ 00:25:21.733 { 00:25:21.733 "name": "NewBaseBdev", 00:25:21.733 "aliases": [ 00:25:21.733 "c234eaac-66e6-4125-a899-88e636d84698" 00:25:21.733 ], 00:25:21.733 "product_name": "Malloc disk", 00:25:21.733 "block_size": 512, 00:25:21.733 "num_blocks": 65536, 00:25:21.733 "uuid": 
"c234eaac-66e6-4125-a899-88e636d84698", 00:25:21.733 "assigned_rate_limits": { 00:25:21.733 "rw_ios_per_sec": 0, 00:25:21.733 "rw_mbytes_per_sec": 0, 00:25:21.733 "r_mbytes_per_sec": 0, 00:25:21.733 "w_mbytes_per_sec": 0 00:25:21.733 }, 00:25:21.733 "claimed": true, 00:25:21.733 "claim_type": "exclusive_write", 00:25:21.733 "zoned": false, 00:25:21.733 "supported_io_types": { 00:25:21.733 "read": true, 00:25:21.733 "write": true, 00:25:21.733 "unmap": true, 00:25:21.733 "flush": true, 00:25:21.733 "reset": true, 00:25:21.733 "nvme_admin": false, 00:25:21.733 "nvme_io": false, 00:25:21.733 "nvme_io_md": false, 00:25:21.733 "write_zeroes": true, 00:25:21.733 "zcopy": true, 00:25:21.733 "get_zone_info": false, 00:25:21.733 "zone_management": false, 00:25:21.733 "zone_append": false, 00:25:21.733 "compare": false, 00:25:21.733 "compare_and_write": false, 00:25:21.733 "abort": true, 00:25:21.733 "seek_hole": false, 00:25:21.733 "seek_data": false, 00:25:21.733 "copy": true, 00:25:21.733 "nvme_iov_md": false 00:25:21.733 }, 00:25:21.733 "memory_domains": [ 00:25:21.733 { 00:25:21.733 "dma_device_id": "system", 00:25:21.733 "dma_device_type": 1 00:25:21.733 }, 00:25:21.733 { 00:25:21.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.733 "dma_device_type": 2 00:25:21.733 } 00:25:21.733 ], 00:25:21.733 "driver_specific": {} 00:25:21.733 } 00:25:21.733 ] 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:21.733 17:21:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.733 "name": "Existed_Raid", 00:25:21.733 "uuid": "59e6e5cc-8b14-4f75-8c5c-af901d4dc475", 00:25:21.733 "strip_size_kb": 64, 00:25:21.733 "state": "online", 00:25:21.733 "raid_level": "concat", 00:25:21.733 "superblock": false, 00:25:21.733 "num_base_bdevs": 3, 00:25:21.733 "num_base_bdevs_discovered": 3, 00:25:21.733 "num_base_bdevs_operational": 3, 00:25:21.733 "base_bdevs_list": [ 00:25:21.733 { 00:25:21.733 "name": "NewBaseBdev", 00:25:21.733 "uuid": "c234eaac-66e6-4125-a899-88e636d84698", 00:25:21.733 "is_configured": true, 00:25:21.733 "data_offset": 0, 
00:25:21.733 "data_size": 65536 00:25:21.733 }, 00:25:21.733 { 00:25:21.733 "name": "BaseBdev2", 00:25:21.733 "uuid": "91a9ec7d-f223-4f25-a9be-73942b996883", 00:25:21.733 "is_configured": true, 00:25:21.733 "data_offset": 0, 00:25:21.733 "data_size": 65536 00:25:21.733 }, 00:25:21.733 { 00:25:21.733 "name": "BaseBdev3", 00:25:21.733 "uuid": "e850990d-6c9d-4ab9-b4d7-d29ec6c2e692", 00:25:21.733 "is_configured": true, 00:25:21.733 "data_offset": 0, 00:25:21.733 "data_size": 65536 00:25:21.733 } 00:25:21.733 ] 00:25:21.733 }' 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.733 17:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.298 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:22.298 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:22.298 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:22.298 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:22.298 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:22.298 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:22.298 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:22.298 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.298 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.298 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:22.298 [2024-11-26 17:21:52.134732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:22.298 17:21:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.298 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:22.298 "name": "Existed_Raid", 00:25:22.298 "aliases": [ 00:25:22.298 "59e6e5cc-8b14-4f75-8c5c-af901d4dc475" 00:25:22.298 ], 00:25:22.298 "product_name": "Raid Volume", 00:25:22.298 "block_size": 512, 00:25:22.298 "num_blocks": 196608, 00:25:22.298 "uuid": "59e6e5cc-8b14-4f75-8c5c-af901d4dc475", 00:25:22.298 "assigned_rate_limits": { 00:25:22.298 "rw_ios_per_sec": 0, 00:25:22.298 "rw_mbytes_per_sec": 0, 00:25:22.298 "r_mbytes_per_sec": 0, 00:25:22.298 "w_mbytes_per_sec": 0 00:25:22.299 }, 00:25:22.299 "claimed": false, 00:25:22.299 "zoned": false, 00:25:22.299 "supported_io_types": { 00:25:22.299 "read": true, 00:25:22.299 "write": true, 00:25:22.299 "unmap": true, 00:25:22.299 "flush": true, 00:25:22.299 "reset": true, 00:25:22.299 "nvme_admin": false, 00:25:22.299 "nvme_io": false, 00:25:22.299 "nvme_io_md": false, 00:25:22.299 "write_zeroes": true, 00:25:22.299 "zcopy": false, 00:25:22.299 "get_zone_info": false, 00:25:22.299 "zone_management": false, 00:25:22.299 "zone_append": false, 00:25:22.299 "compare": false, 00:25:22.299 "compare_and_write": false, 00:25:22.299 "abort": false, 00:25:22.299 "seek_hole": false, 00:25:22.299 "seek_data": false, 00:25:22.299 "copy": false, 00:25:22.299 "nvme_iov_md": false 00:25:22.299 }, 00:25:22.299 "memory_domains": [ 00:25:22.299 { 00:25:22.299 "dma_device_id": "system", 00:25:22.299 "dma_device_type": 1 00:25:22.299 }, 00:25:22.299 { 00:25:22.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.299 "dma_device_type": 2 00:25:22.299 }, 00:25:22.299 { 00:25:22.299 "dma_device_id": "system", 00:25:22.299 "dma_device_type": 1 00:25:22.299 }, 00:25:22.299 { 00:25:22.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.299 "dma_device_type": 2 00:25:22.299 }, 00:25:22.299 { 00:25:22.299 "dma_device_id": "system", 00:25:22.299 
"dma_device_type": 1 00:25:22.299 }, 00:25:22.299 { 00:25:22.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.299 "dma_device_type": 2 00:25:22.299 } 00:25:22.299 ], 00:25:22.299 "driver_specific": { 00:25:22.299 "raid": { 00:25:22.299 "uuid": "59e6e5cc-8b14-4f75-8c5c-af901d4dc475", 00:25:22.299 "strip_size_kb": 64, 00:25:22.299 "state": "online", 00:25:22.299 "raid_level": "concat", 00:25:22.299 "superblock": false, 00:25:22.299 "num_base_bdevs": 3, 00:25:22.299 "num_base_bdevs_discovered": 3, 00:25:22.299 "num_base_bdevs_operational": 3, 00:25:22.299 "base_bdevs_list": [ 00:25:22.299 { 00:25:22.299 "name": "NewBaseBdev", 00:25:22.299 "uuid": "c234eaac-66e6-4125-a899-88e636d84698", 00:25:22.299 "is_configured": true, 00:25:22.299 "data_offset": 0, 00:25:22.299 "data_size": 65536 00:25:22.299 }, 00:25:22.299 { 00:25:22.299 "name": "BaseBdev2", 00:25:22.299 "uuid": "91a9ec7d-f223-4f25-a9be-73942b996883", 00:25:22.299 "is_configured": true, 00:25:22.299 "data_offset": 0, 00:25:22.299 "data_size": 65536 00:25:22.299 }, 00:25:22.299 { 00:25:22.299 "name": "BaseBdev3", 00:25:22.299 "uuid": "e850990d-6c9d-4ab9-b4d7-d29ec6c2e692", 00:25:22.299 "is_configured": true, 00:25:22.299 "data_offset": 0, 00:25:22.299 "data_size": 65536 00:25:22.299 } 00:25:22.299 ] 00:25:22.299 } 00:25:22.299 } 00:25:22.299 }' 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:22.299 BaseBdev2 00:25:22.299 BaseBdev3' 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.299 [2024-11-26 17:21:52.390025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:22.299 [2024-11-26 17:21:52.390075] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:22.299 [2024-11-26 17:21:52.390167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:22.299 [2024-11-26 17:21:52.390228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:22.299 [2024-11-26 17:21:52.390243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65704 
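The verify_raid_bdev_properties loop that just completed relies on two jq idioms: selecting configured base bdev names, and building a comparison key from `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`, where jq renders null/missing fields as empty strings — which is why the script matches against `512` followed by three spaces. A standalone sketch of both filters in Python (the sample JSON is illustrative, not taken from this run):

```python
import json

# Illustrative sample shaped like the rpc_cmd bdev_get_bdevs output
# captured in raid_bdev_info above (names are hypothetical).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "NewBaseBdev", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": false}
      ]
    }
  }
}
""")

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(base_bdev_names)  # ['NewBaseBdev', 'BaseBdev2']

def props_key(bdev):
    # Equivalent of: jq -r '[.block_size, .md_size, .md_interleave,
    #                        .dif_type] | join(" ")'
    # jq turns null/absent fields into empty strings, so a plain 512-byte
    # bdev with no metadata yields "512" plus three trailing spaces.
    fields = [bdev.get("block_size"), bdev.get("md_size"),
              bdev.get("md_interleave"), bdev.get("dif_type")]
    return " ".join("" if f is None else str(f) for f in fields)

print(repr(props_key({"block_size": 512})))
```

The trailing-space key explains the escaped pattern `[[ 512 == \5\1\2\ \ \ ]]` seen in the comparisons above.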
00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65704 ']' 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65704 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:22.299 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65704 00:25:22.563 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:22.563 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:22.563 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65704' 00:25:22.563 killing process with pid 65704 00:25:22.563 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65704 00:25:22.563 [2024-11-26 17:21:52.444125] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:22.563 17:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65704 00:25:22.858 [2024-11-26 17:21:52.772168] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:25:24.232 00:25:24.232 real 0m10.453s 00:25:24.232 user 0m16.275s 00:25:24.232 sys 0m2.186s 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.232 ************************************ 00:25:24.232 END TEST raid_state_function_test 00:25:24.232 ************************************ 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.232 17:21:54 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:25:24.232 17:21:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:24.232 17:21:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:24.232 17:21:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:24.232 ************************************ 00:25:24.232 START TEST raid_state_function_test_sb 00:25:24.232 ************************************ 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:24.232 17:21:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66325 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66325' 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:24.232 Process raid pid: 
66325 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66325 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66325 ']' 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.232 17:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.232 [2024-11-26 17:21:54.208989] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
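The `waitforlisten 66325` step above blocks until the freshly launched bdev_svc process is listening on its UNIX-domain RPC socket (`/var/tmp/spdk.sock`). The real helper in autotest_common.sh also verifies the process is still alive and retries RPC connections; a simplified sketch of just the poll-until-timeout pattern (path and retry counts are illustrative):

```python
import os
import time

def waitforlisten(rpc_addr="/var/tmp/spdk.sock", max_retries=100, delay=0.1):
    # Simplified model of the autotest_common.sh waitforlisten pattern:
    # poll until the daemon's RPC socket path appears, or give up after
    # max_retries attempts. (The real helper additionally checks that the
    # target pid is alive and that the socket accepts RPCs.)
    for _ in range(max_retries):
        if os.path.exists(rpc_addr):
            return True
        time.sleep(delay)
    return False
```

The `local max_retries=100` variable visible in the trace above plays the same role as the loop bound here.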
00:25:24.232 [2024-11-26 17:21:54.209447] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.491 [2024-11-26 17:21:54.397056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.491 [2024-11-26 17:21:54.548451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.749 [2024-11-26 17:21:54.776331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:24.749 [2024-11-26 17:21:54.776392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.008 [2024-11-26 17:21:55.036397] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:25.008 [2024-11-26 17:21:55.036480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:25.008 [2024-11-26 17:21:55.036493] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:25.008 [2024-11-26 17:21:55.036507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:25.008 [2024-11-26 17:21:55.036529] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:25:25.008 [2024-11-26 17:21:55.036542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.008 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:25.008 "name": "Existed_Raid", 00:25:25.008 "uuid": "760f3986-11f0-4985-ac54-90c8cbeee857", 00:25:25.008 "strip_size_kb": 64, 00:25:25.008 "state": "configuring", 00:25:25.008 "raid_level": "concat", 00:25:25.008 "superblock": true, 00:25:25.008 "num_base_bdevs": 3, 00:25:25.008 "num_base_bdevs_discovered": 0, 00:25:25.008 "num_base_bdevs_operational": 3, 00:25:25.008 "base_bdevs_list": [ 00:25:25.008 { 00:25:25.009 "name": "BaseBdev1", 00:25:25.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.009 "is_configured": false, 00:25:25.009 "data_offset": 0, 00:25:25.009 "data_size": 0 00:25:25.009 }, 00:25:25.009 { 00:25:25.009 "name": "BaseBdev2", 00:25:25.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.009 "is_configured": false, 00:25:25.009 "data_offset": 0, 00:25:25.009 "data_size": 0 00:25:25.009 }, 00:25:25.009 { 00:25:25.009 "name": "BaseBdev3", 00:25:25.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.009 "is_configured": false, 00:25:25.009 "data_offset": 0, 00:25:25.009 "data_size": 0 00:25:25.009 } 00:25:25.009 ] 00:25:25.009 }' 00:25:25.009 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:25.009 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.577 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:25.577 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.577 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.577 [2024-11-26 17:21:55.483692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:25.577 [2024-11-26 17:21:55.483743] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:25.577 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.577 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:25.577 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.577 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.577 [2024-11-26 17:21:55.495702] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:25.577 [2024-11-26 17:21:55.495974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:25.577 [2024-11-26 17:21:55.496005] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:25.577 [2024-11-26 17:21:55.496024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:25.577 [2024-11-26 17:21:55.496035] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:25.577 [2024-11-26 17:21:55.496051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:25.577 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.577 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.578 [2024-11-26 17:21:55.549098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:25.578 BaseBdev1 
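The `bdev_malloc_create 32 512 -b BaseBdev1` call just above takes a size in MiB and a block size in bytes; the `"num_blocks": 65536` that the subsequent bdev_get_bdevs dump reports for BaseBdev1 follows directly from that arithmetic:

```python
# bdev_malloc_create 32 512: a 32 MiB malloc bdev with 512-byte blocks.
size_mib, block_size = 32, 512
num_blocks = size_mib * 1024 * 1024 // block_size
print(num_blocks)  # 65536, matching BaseBdev1's reported num_blocks
```

The same sizing applies to BaseBdev2 and BaseBdev3 later in the test, which is why every base bdev reports 65536 blocks.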
00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.578 [ 00:25:25.578 { 00:25:25.578 "name": "BaseBdev1", 00:25:25.578 "aliases": [ 00:25:25.578 "8dc1216b-b1a4-4c14-8b92-1c512ea78960" 00:25:25.578 ], 00:25:25.578 "product_name": "Malloc disk", 00:25:25.578 "block_size": 512, 00:25:25.578 "num_blocks": 65536, 00:25:25.578 "uuid": "8dc1216b-b1a4-4c14-8b92-1c512ea78960", 00:25:25.578 "assigned_rate_limits": { 00:25:25.578 
"rw_ios_per_sec": 0, 00:25:25.578 "rw_mbytes_per_sec": 0, 00:25:25.578 "r_mbytes_per_sec": 0, 00:25:25.578 "w_mbytes_per_sec": 0 00:25:25.578 }, 00:25:25.578 "claimed": true, 00:25:25.578 "claim_type": "exclusive_write", 00:25:25.578 "zoned": false, 00:25:25.578 "supported_io_types": { 00:25:25.578 "read": true, 00:25:25.578 "write": true, 00:25:25.578 "unmap": true, 00:25:25.578 "flush": true, 00:25:25.578 "reset": true, 00:25:25.578 "nvme_admin": false, 00:25:25.578 "nvme_io": false, 00:25:25.578 "nvme_io_md": false, 00:25:25.578 "write_zeroes": true, 00:25:25.578 "zcopy": true, 00:25:25.578 "get_zone_info": false, 00:25:25.578 "zone_management": false, 00:25:25.578 "zone_append": false, 00:25:25.578 "compare": false, 00:25:25.578 "compare_and_write": false, 00:25:25.578 "abort": true, 00:25:25.578 "seek_hole": false, 00:25:25.578 "seek_data": false, 00:25:25.578 "copy": true, 00:25:25.578 "nvme_iov_md": false 00:25:25.578 }, 00:25:25.578 "memory_domains": [ 00:25:25.578 { 00:25:25.578 "dma_device_id": "system", 00:25:25.578 "dma_device_type": 1 00:25:25.578 }, 00:25:25.578 { 00:25:25.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:25.578 "dma_device_type": 2 00:25:25.578 } 00:25:25.578 ], 00:25:25.578 "driver_specific": {} 00:25:25.578 } 00:25:25.578 ] 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:25.578 "name": "Existed_Raid", 00:25:25.578 "uuid": "4210aaa6-0921-4018-84a7-80b434a6e89e", 00:25:25.578 "strip_size_kb": 64, 00:25:25.578 "state": "configuring", 00:25:25.578 "raid_level": "concat", 00:25:25.578 "superblock": true, 00:25:25.578 "num_base_bdevs": 3, 00:25:25.578 "num_base_bdevs_discovered": 1, 00:25:25.578 "num_base_bdevs_operational": 3, 00:25:25.578 "base_bdevs_list": [ 00:25:25.578 { 00:25:25.578 "name": "BaseBdev1", 00:25:25.578 "uuid": "8dc1216b-b1a4-4c14-8b92-1c512ea78960", 00:25:25.578 "is_configured": true, 00:25:25.578 "data_offset": 2048, 00:25:25.578 "data_size": 
63488 00:25:25.578 }, 00:25:25.578 { 00:25:25.578 "name": "BaseBdev2", 00:25:25.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.578 "is_configured": false, 00:25:25.578 "data_offset": 0, 00:25:25.578 "data_size": 0 00:25:25.578 }, 00:25:25.578 { 00:25:25.578 "name": "BaseBdev3", 00:25:25.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.578 "is_configured": false, 00:25:25.578 "data_offset": 0, 00:25:25.578 "data_size": 0 00:25:25.578 } 00:25:25.578 ] 00:25:25.578 }' 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:25.578 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.149 [2024-11-26 17:21:55.980688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:26.149 [2024-11-26 17:21:55.980760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.149 [2024-11-26 17:21:55.992769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:26.149 [2024-11-26 
17:21:55.995298] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:26.149 [2024-11-26 17:21:55.995479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:26.149 [2024-11-26 17:21:55.995596] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:26.149 [2024-11-26 17:21:55.995647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:26.149 17:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:26.149 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:26.149 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:26.149 17:21:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:25:26.149 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.149 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:26.149 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.149 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.149 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.149 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:26.149 "name": "Existed_Raid", 00:25:26.149 "uuid": "28f19268-ec87-40e8-9968-fbf1e3b5553b", 00:25:26.149 "strip_size_kb": 64, 00:25:26.149 "state": "configuring", 00:25:26.149 "raid_level": "concat", 00:25:26.149 "superblock": true, 00:25:26.149 "num_base_bdevs": 3, 00:25:26.149 "num_base_bdevs_discovered": 1, 00:25:26.149 "num_base_bdevs_operational": 3, 00:25:26.149 "base_bdevs_list": [ 00:25:26.149 { 00:25:26.149 "name": "BaseBdev1", 00:25:26.149 "uuid": "8dc1216b-b1a4-4c14-8b92-1c512ea78960", 00:25:26.149 "is_configured": true, 00:25:26.149 "data_offset": 2048, 00:25:26.149 "data_size": 63488 00:25:26.149 }, 00:25:26.149 { 00:25:26.149 "name": "BaseBdev2", 00:25:26.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.149 "is_configured": false, 00:25:26.149 "data_offset": 0, 00:25:26.149 "data_size": 0 00:25:26.149 }, 00:25:26.149 { 00:25:26.149 "name": "BaseBdev3", 00:25:26.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.149 "is_configured": false, 00:25:26.149 "data_offset": 0, 00:25:26.149 "data_size": 0 00:25:26.149 } 00:25:26.149 ] 00:25:26.149 }' 00:25:26.149 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:26.149 17:21:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:25:26.409 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:26.409 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.409 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.409 [2024-11-26 17:21:56.439844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:26.409 BaseBdev2 00:25:26.409 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.410 [ 00:25:26.410 { 00:25:26.410 "name": "BaseBdev2", 00:25:26.410 "aliases": [ 00:25:26.410 "181795fb-d8df-454a-902e-e3f57a2f7869" 00:25:26.410 ], 00:25:26.410 "product_name": "Malloc disk", 00:25:26.410 "block_size": 512, 00:25:26.410 "num_blocks": 65536, 00:25:26.410 "uuid": "181795fb-d8df-454a-902e-e3f57a2f7869", 00:25:26.410 "assigned_rate_limits": { 00:25:26.410 "rw_ios_per_sec": 0, 00:25:26.410 "rw_mbytes_per_sec": 0, 00:25:26.410 "r_mbytes_per_sec": 0, 00:25:26.410 "w_mbytes_per_sec": 0 00:25:26.410 }, 00:25:26.410 "claimed": true, 00:25:26.410 "claim_type": "exclusive_write", 00:25:26.410 "zoned": false, 00:25:26.410 "supported_io_types": { 00:25:26.410 "read": true, 00:25:26.410 "write": true, 00:25:26.410 "unmap": true, 00:25:26.410 "flush": true, 00:25:26.410 "reset": true, 00:25:26.410 "nvme_admin": false, 00:25:26.410 "nvme_io": false, 00:25:26.410 "nvme_io_md": false, 00:25:26.410 "write_zeroes": true, 00:25:26.410 "zcopy": true, 00:25:26.410 "get_zone_info": false, 00:25:26.410 "zone_management": false, 00:25:26.410 "zone_append": false, 00:25:26.410 "compare": false, 00:25:26.410 "compare_and_write": false, 00:25:26.410 "abort": true, 00:25:26.410 "seek_hole": false, 00:25:26.410 "seek_data": false, 00:25:26.410 "copy": true, 00:25:26.410 "nvme_iov_md": false 00:25:26.410 }, 00:25:26.410 "memory_domains": [ 00:25:26.410 { 00:25:26.410 "dma_device_id": "system", 00:25:26.410 "dma_device_type": 1 00:25:26.410 }, 00:25:26.410 { 00:25:26.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:26.410 "dma_device_type": 2 00:25:26.410 } 00:25:26.410 ], 00:25:26.410 "driver_specific": {} 00:25:26.410 } 00:25:26.410 ] 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.410 17:21:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.670 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:26.670 "name": "Existed_Raid", 00:25:26.670 "uuid": "28f19268-ec87-40e8-9968-fbf1e3b5553b", 00:25:26.670 "strip_size_kb": 64, 00:25:26.670 "state": "configuring", 00:25:26.670 "raid_level": "concat", 00:25:26.670 "superblock": true, 00:25:26.670 "num_base_bdevs": 3, 00:25:26.670 "num_base_bdevs_discovered": 2, 00:25:26.670 "num_base_bdevs_operational": 3, 00:25:26.670 "base_bdevs_list": [ 00:25:26.670 { 00:25:26.670 "name": "BaseBdev1", 00:25:26.670 "uuid": "8dc1216b-b1a4-4c14-8b92-1c512ea78960", 00:25:26.670 "is_configured": true, 00:25:26.670 "data_offset": 2048, 00:25:26.670 "data_size": 63488 00:25:26.670 }, 00:25:26.670 { 00:25:26.670 "name": "BaseBdev2", 00:25:26.670 "uuid": "181795fb-d8df-454a-902e-e3f57a2f7869", 00:25:26.670 "is_configured": true, 00:25:26.670 "data_offset": 2048, 00:25:26.670 "data_size": 63488 00:25:26.670 }, 00:25:26.670 { 00:25:26.670 "name": "BaseBdev3", 00:25:26.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.670 "is_configured": false, 00:25:26.670 "data_offset": 0, 00:25:26.670 "data_size": 0 00:25:26.670 } 00:25:26.670 ] 00:25:26.670 }' 00:25:26.670 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:26.670 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.983 [2024-11-26 17:21:56.953494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:26.983 [2024-11-26 17:21:56.953843] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:26.983 [2024-11-26 17:21:56.953869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:26.983 [2024-11-26 17:21:56.954193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:26.983 BaseBdev3 00:25:26.983 [2024-11-26 17:21:56.954367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:26.983 [2024-11-26 17:21:56.954384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:26.983 [2024-11-26 17:21:56.954560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.983 17:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.983 [ 00:25:26.983 { 00:25:26.983 "name": "BaseBdev3", 00:25:26.983 "aliases": [ 00:25:26.983 "7df86e6d-53b3-474a-ae26-b20ad491d95e" 00:25:26.983 ], 00:25:26.983 "product_name": "Malloc disk", 00:25:26.983 "block_size": 512, 00:25:26.983 "num_blocks": 65536, 00:25:26.983 "uuid": "7df86e6d-53b3-474a-ae26-b20ad491d95e", 00:25:26.983 "assigned_rate_limits": { 00:25:26.983 "rw_ios_per_sec": 0, 00:25:26.983 "rw_mbytes_per_sec": 0, 00:25:26.983 "r_mbytes_per_sec": 0, 00:25:26.983 "w_mbytes_per_sec": 0 00:25:26.983 }, 00:25:26.983 "claimed": true, 00:25:26.984 "claim_type": "exclusive_write", 00:25:26.984 "zoned": false, 00:25:26.984 "supported_io_types": { 00:25:26.984 "read": true, 00:25:26.984 "write": true, 00:25:26.984 "unmap": true, 00:25:26.984 "flush": true, 00:25:26.984 "reset": true, 00:25:26.984 "nvme_admin": false, 00:25:26.984 "nvme_io": false, 00:25:26.984 "nvme_io_md": false, 00:25:26.984 "write_zeroes": true, 00:25:26.984 "zcopy": true, 00:25:26.984 "get_zone_info": false, 00:25:26.984 "zone_management": false, 00:25:26.984 "zone_append": false, 00:25:26.984 "compare": false, 00:25:26.984 "compare_and_write": false, 00:25:26.984 "abort": true, 00:25:26.984 "seek_hole": false, 00:25:26.984 "seek_data": false, 00:25:26.984 "copy": true, 00:25:26.984 "nvme_iov_md": false 00:25:26.984 }, 00:25:26.984 "memory_domains": [ 00:25:26.984 { 00:25:26.984 "dma_device_id": "system", 00:25:26.984 "dma_device_type": 1 00:25:26.984 }, 00:25:26.984 { 00:25:26.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:26.984 "dma_device_type": 2 00:25:26.984 } 00:25:26.984 ], 00:25:26.984 "driver_specific": 
{} 00:25:26.984 } 00:25:26.984 ] 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.984 
17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:26.984 "name": "Existed_Raid", 00:25:26.984 "uuid": "28f19268-ec87-40e8-9968-fbf1e3b5553b", 00:25:26.984 "strip_size_kb": 64, 00:25:26.984 "state": "online", 00:25:26.984 "raid_level": "concat", 00:25:26.984 "superblock": true, 00:25:26.984 "num_base_bdevs": 3, 00:25:26.984 "num_base_bdevs_discovered": 3, 00:25:26.984 "num_base_bdevs_operational": 3, 00:25:26.984 "base_bdevs_list": [ 00:25:26.984 { 00:25:26.984 "name": "BaseBdev1", 00:25:26.984 "uuid": "8dc1216b-b1a4-4c14-8b92-1c512ea78960", 00:25:26.984 "is_configured": true, 00:25:26.984 "data_offset": 2048, 00:25:26.984 "data_size": 63488 00:25:26.984 }, 00:25:26.984 { 00:25:26.984 "name": "BaseBdev2", 00:25:26.984 "uuid": "181795fb-d8df-454a-902e-e3f57a2f7869", 00:25:26.984 "is_configured": true, 00:25:26.984 "data_offset": 2048, 00:25:26.984 "data_size": 63488 00:25:26.984 }, 00:25:26.984 { 00:25:26.984 "name": "BaseBdev3", 00:25:26.984 "uuid": "7df86e6d-53b3-474a-ae26-b20ad491d95e", 00:25:26.984 "is_configured": true, 00:25:26.984 "data_offset": 2048, 00:25:26.984 "data_size": 63488 00:25:26.984 } 00:25:26.984 ] 00:25:26.984 }' 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:26.984 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.553 [2024-11-26 17:21:57.445226] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:27.553 "name": "Existed_Raid", 00:25:27.553 "aliases": [ 00:25:27.553 "28f19268-ec87-40e8-9968-fbf1e3b5553b" 00:25:27.553 ], 00:25:27.553 "product_name": "Raid Volume", 00:25:27.553 "block_size": 512, 00:25:27.553 "num_blocks": 190464, 00:25:27.553 "uuid": "28f19268-ec87-40e8-9968-fbf1e3b5553b", 00:25:27.553 "assigned_rate_limits": { 00:25:27.553 "rw_ios_per_sec": 0, 00:25:27.553 "rw_mbytes_per_sec": 0, 00:25:27.553 "r_mbytes_per_sec": 0, 00:25:27.553 "w_mbytes_per_sec": 0 00:25:27.553 }, 00:25:27.553 "claimed": false, 00:25:27.553 "zoned": false, 00:25:27.553 "supported_io_types": { 00:25:27.553 "read": true, 00:25:27.553 "write": true, 00:25:27.553 "unmap": true, 00:25:27.553 "flush": true, 00:25:27.553 "reset": true, 00:25:27.553 "nvme_admin": false, 00:25:27.553 "nvme_io": false, 00:25:27.553 "nvme_io_md": false, 00:25:27.553 
"write_zeroes": true, 00:25:27.553 "zcopy": false, 00:25:27.553 "get_zone_info": false, 00:25:27.553 "zone_management": false, 00:25:27.553 "zone_append": false, 00:25:27.553 "compare": false, 00:25:27.553 "compare_and_write": false, 00:25:27.553 "abort": false, 00:25:27.553 "seek_hole": false, 00:25:27.553 "seek_data": false, 00:25:27.553 "copy": false, 00:25:27.553 "nvme_iov_md": false 00:25:27.553 }, 00:25:27.553 "memory_domains": [ 00:25:27.553 { 00:25:27.553 "dma_device_id": "system", 00:25:27.553 "dma_device_type": 1 00:25:27.553 }, 00:25:27.553 { 00:25:27.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.553 "dma_device_type": 2 00:25:27.553 }, 00:25:27.553 { 00:25:27.553 "dma_device_id": "system", 00:25:27.553 "dma_device_type": 1 00:25:27.553 }, 00:25:27.553 { 00:25:27.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.553 "dma_device_type": 2 00:25:27.553 }, 00:25:27.553 { 00:25:27.553 "dma_device_id": "system", 00:25:27.553 "dma_device_type": 1 00:25:27.553 }, 00:25:27.553 { 00:25:27.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.553 "dma_device_type": 2 00:25:27.553 } 00:25:27.553 ], 00:25:27.553 "driver_specific": { 00:25:27.553 "raid": { 00:25:27.553 "uuid": "28f19268-ec87-40e8-9968-fbf1e3b5553b", 00:25:27.553 "strip_size_kb": 64, 00:25:27.553 "state": "online", 00:25:27.553 "raid_level": "concat", 00:25:27.553 "superblock": true, 00:25:27.553 "num_base_bdevs": 3, 00:25:27.553 "num_base_bdevs_discovered": 3, 00:25:27.553 "num_base_bdevs_operational": 3, 00:25:27.553 "base_bdevs_list": [ 00:25:27.553 { 00:25:27.553 "name": "BaseBdev1", 00:25:27.553 "uuid": "8dc1216b-b1a4-4c14-8b92-1c512ea78960", 00:25:27.553 "is_configured": true, 00:25:27.553 "data_offset": 2048, 00:25:27.553 "data_size": 63488 00:25:27.553 }, 00:25:27.553 { 00:25:27.553 "name": "BaseBdev2", 00:25:27.553 "uuid": "181795fb-d8df-454a-902e-e3f57a2f7869", 00:25:27.553 "is_configured": true, 00:25:27.553 "data_offset": 2048, 00:25:27.553 "data_size": 63488 00:25:27.553 }, 
00:25:27.553 { 00:25:27.553 "name": "BaseBdev3", 00:25:27.553 "uuid": "7df86e6d-53b3-474a-ae26-b20ad491d95e", 00:25:27.553 "is_configured": true, 00:25:27.553 "data_offset": 2048, 00:25:27.553 "data_size": 63488 00:25:27.553 } 00:25:27.553 ] 00:25:27.553 } 00:25:27.553 } 00:25:27.553 }' 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:27.553 BaseBdev2 00:25:27.553 BaseBdev3' 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:27.553 
17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.553 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.813 [2024-11-26 17:21:57.736606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:27.813 [2024-11-26 17:21:57.736798] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:27.813 [2024-11-26 17:21:57.736898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:27.813 "name": "Existed_Raid", 00:25:27.813 "uuid": "28f19268-ec87-40e8-9968-fbf1e3b5553b", 00:25:27.813 "strip_size_kb": 64, 00:25:27.813 "state": "offline", 00:25:27.813 "raid_level": "concat", 00:25:27.813 "superblock": true, 00:25:27.813 "num_base_bdevs": 3, 00:25:27.813 "num_base_bdevs_discovered": 2, 00:25:27.813 "num_base_bdevs_operational": 2, 00:25:27.813 "base_bdevs_list": [ 00:25:27.813 { 00:25:27.813 "name": null, 00:25:27.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.813 "is_configured": false, 00:25:27.813 "data_offset": 0, 00:25:27.813 "data_size": 63488 00:25:27.813 }, 00:25:27.813 { 00:25:27.813 "name": "BaseBdev2", 00:25:27.813 "uuid": "181795fb-d8df-454a-902e-e3f57a2f7869", 00:25:27.813 "is_configured": true, 00:25:27.813 "data_offset": 2048, 00:25:27.813 "data_size": 63488 00:25:27.813 }, 00:25:27.813 { 00:25:27.813 "name": "BaseBdev3", 00:25:27.813 "uuid": "7df86e6d-53b3-474a-ae26-b20ad491d95e", 
00:25:27.813 "is_configured": true, 00:25:27.813 "data_offset": 2048, 00:25:27.813 "data_size": 63488 00:25:27.813 } 00:25:27.813 ] 00:25:27.813 }' 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:27.813 17:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.381 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:28.381 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:28.381 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.382 [2024-11-26 17:21:58.309670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.382 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.382 [2024-11-26 17:21:58.468444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:28.382 [2024-11-26 17:21:58.468672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.798 BaseBdev2 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:28.798 17:21:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.798 [ 00:25:28.798 { 00:25:28.798 "name": "BaseBdev2", 00:25:28.798 "aliases": [ 00:25:28.798 "99544588-89f1-4470-a5a9-6e6ff66a0a53" 00:25:28.798 ], 00:25:28.798 "product_name": "Malloc disk", 00:25:28.798 "block_size": 512, 00:25:28.798 "num_blocks": 65536, 00:25:28.798 "uuid": "99544588-89f1-4470-a5a9-6e6ff66a0a53", 00:25:28.798 "assigned_rate_limits": { 00:25:28.798 "rw_ios_per_sec": 0, 00:25:28.798 "rw_mbytes_per_sec": 0, 00:25:28.798 "r_mbytes_per_sec": 0, 00:25:28.798 "w_mbytes_per_sec": 0 00:25:28.798 }, 00:25:28.798 "claimed": false, 00:25:28.798 "zoned": false, 00:25:28.798 "supported_io_types": { 00:25:28.798 "read": true, 00:25:28.798 "write": true, 00:25:28.798 "unmap": true, 00:25:28.798 "flush": true, 00:25:28.798 "reset": true, 00:25:28.798 "nvme_admin": false, 00:25:28.798 "nvme_io": false, 00:25:28.798 "nvme_io_md": false, 00:25:28.798 "write_zeroes": true, 00:25:28.798 "zcopy": true, 00:25:28.798 "get_zone_info": false, 00:25:28.798 
"zone_management": false, 00:25:28.798 "zone_append": false, 00:25:28.798 "compare": false, 00:25:28.798 "compare_and_write": false, 00:25:28.798 "abort": true, 00:25:28.798 "seek_hole": false, 00:25:28.798 "seek_data": false, 00:25:28.798 "copy": true, 00:25:28.798 "nvme_iov_md": false 00:25:28.798 }, 00:25:28.798 "memory_domains": [ 00:25:28.798 { 00:25:28.798 "dma_device_id": "system", 00:25:28.798 "dma_device_type": 1 00:25:28.798 }, 00:25:28.798 { 00:25:28.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:28.798 "dma_device_type": 2 00:25:28.798 } 00:25:28.798 ], 00:25:28.798 "driver_specific": {} 00:25:28.798 } 00:25:28.798 ] 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.798 BaseBdev3 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.798 [ 00:25:28.798 { 00:25:28.798 "name": "BaseBdev3", 00:25:28.798 "aliases": [ 00:25:28.798 "ec5ae857-be2e-44b0-b13f-6eeed067fec0" 00:25:28.798 ], 00:25:28.798 "product_name": "Malloc disk", 00:25:28.798 "block_size": 512, 00:25:28.798 "num_blocks": 65536, 00:25:28.798 "uuid": "ec5ae857-be2e-44b0-b13f-6eeed067fec0", 00:25:28.798 "assigned_rate_limits": { 00:25:28.798 "rw_ios_per_sec": 0, 00:25:28.798 "rw_mbytes_per_sec": 0, 00:25:28.798 "r_mbytes_per_sec": 0, 00:25:28.798 "w_mbytes_per_sec": 0 00:25:28.798 }, 00:25:28.798 "claimed": false, 00:25:28.798 "zoned": false, 00:25:28.798 "supported_io_types": { 00:25:28.798 "read": true, 00:25:28.798 "write": true, 00:25:28.798 "unmap": true, 00:25:28.798 "flush": true, 00:25:28.798 "reset": true, 00:25:28.798 "nvme_admin": false, 00:25:28.798 "nvme_io": false, 00:25:28.798 "nvme_io_md": false, 00:25:28.798 "write_zeroes": true, 00:25:28.798 
"zcopy": true, 00:25:28.798 "get_zone_info": false, 00:25:28.798 "zone_management": false, 00:25:28.798 "zone_append": false, 00:25:28.798 "compare": false, 00:25:28.798 "compare_and_write": false, 00:25:28.798 "abort": true, 00:25:28.798 "seek_hole": false, 00:25:28.798 "seek_data": false, 00:25:28.798 "copy": true, 00:25:28.798 "nvme_iov_md": false 00:25:28.798 }, 00:25:28.798 "memory_domains": [ 00:25:28.798 { 00:25:28.798 "dma_device_id": "system", 00:25:28.798 "dma_device_type": 1 00:25:28.798 }, 00:25:28.798 { 00:25:28.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:28.798 "dma_device_type": 2 00:25:28.798 } 00:25:28.798 ], 00:25:28.798 "driver_specific": {} 00:25:28.798 } 00:25:28.798 ] 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.798 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.798 [2024-11-26 17:21:58.818520] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:28.799 [2024-11-26 17:21:58.818745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:28.799 [2024-11-26 17:21:58.818872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:28.799 [2024-11-26 17:21:58.821534] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.799 17:21:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:28.799 "name": "Existed_Raid", 00:25:28.799 "uuid": "ba0eaad1-65bd-4f50-80ac-886c10d594f0", 00:25:28.799 "strip_size_kb": 64, 00:25:28.799 "state": "configuring", 00:25:28.799 "raid_level": "concat", 00:25:28.799 "superblock": true, 00:25:28.799 "num_base_bdevs": 3, 00:25:28.799 "num_base_bdevs_discovered": 2, 00:25:28.799 "num_base_bdevs_operational": 3, 00:25:28.799 "base_bdevs_list": [ 00:25:28.799 { 00:25:28.799 "name": "BaseBdev1", 00:25:28.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.799 "is_configured": false, 00:25:28.799 "data_offset": 0, 00:25:28.799 "data_size": 0 00:25:28.799 }, 00:25:28.799 { 00:25:28.799 "name": "BaseBdev2", 00:25:28.799 "uuid": "99544588-89f1-4470-a5a9-6e6ff66a0a53", 00:25:28.799 "is_configured": true, 00:25:28.799 "data_offset": 2048, 00:25:28.799 "data_size": 63488 00:25:28.799 }, 00:25:28.799 { 00:25:28.799 "name": "BaseBdev3", 00:25:28.799 "uuid": "ec5ae857-be2e-44b0-b13f-6eeed067fec0", 00:25:28.799 "is_configured": true, 00:25:28.799 "data_offset": 2048, 00:25:28.799 "data_size": 63488 00:25:28.799 } 00:25:28.799 ] 00:25:28.799 }' 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:28.799 17:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.366 [2024-11-26 17:21:59.233914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.366 17:21:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:29.366 "name": "Existed_Raid", 00:25:29.366 "uuid": "ba0eaad1-65bd-4f50-80ac-886c10d594f0", 00:25:29.366 "strip_size_kb": 64, 
00:25:29.366 "state": "configuring", 00:25:29.366 "raid_level": "concat", 00:25:29.366 "superblock": true, 00:25:29.366 "num_base_bdevs": 3, 00:25:29.366 "num_base_bdevs_discovered": 1, 00:25:29.366 "num_base_bdevs_operational": 3, 00:25:29.366 "base_bdevs_list": [ 00:25:29.366 { 00:25:29.366 "name": "BaseBdev1", 00:25:29.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.366 "is_configured": false, 00:25:29.366 "data_offset": 0, 00:25:29.366 "data_size": 0 00:25:29.366 }, 00:25:29.366 { 00:25:29.366 "name": null, 00:25:29.366 "uuid": "99544588-89f1-4470-a5a9-6e6ff66a0a53", 00:25:29.366 "is_configured": false, 00:25:29.366 "data_offset": 0, 00:25:29.366 "data_size": 63488 00:25:29.366 }, 00:25:29.366 { 00:25:29.366 "name": "BaseBdev3", 00:25:29.366 "uuid": "ec5ae857-be2e-44b0-b13f-6eeed067fec0", 00:25:29.366 "is_configured": true, 00:25:29.366 "data_offset": 2048, 00:25:29.366 "data_size": 63488 00:25:29.366 } 00:25:29.366 ] 00:25:29.366 }' 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:29.366 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.625 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.625 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:29.625 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.625 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.625 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.625 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:29.625 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:25:29.625 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.625 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.625 [2024-11-26 17:21:59.737402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:29.885 BaseBdev1 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.885 
[ 00:25:29.885 { 00:25:29.885 "name": "BaseBdev1", 00:25:29.885 "aliases": [ 00:25:29.885 "cad76646-82bb-4cfd-a910-3a359c993f4f" 00:25:29.885 ], 00:25:29.885 "product_name": "Malloc disk", 00:25:29.885 "block_size": 512, 00:25:29.885 "num_blocks": 65536, 00:25:29.885 "uuid": "cad76646-82bb-4cfd-a910-3a359c993f4f", 00:25:29.885 "assigned_rate_limits": { 00:25:29.885 "rw_ios_per_sec": 0, 00:25:29.885 "rw_mbytes_per_sec": 0, 00:25:29.885 "r_mbytes_per_sec": 0, 00:25:29.885 "w_mbytes_per_sec": 0 00:25:29.885 }, 00:25:29.885 "claimed": true, 00:25:29.885 "claim_type": "exclusive_write", 00:25:29.885 "zoned": false, 00:25:29.885 "supported_io_types": { 00:25:29.885 "read": true, 00:25:29.885 "write": true, 00:25:29.885 "unmap": true, 00:25:29.885 "flush": true, 00:25:29.885 "reset": true, 00:25:29.885 "nvme_admin": false, 00:25:29.885 "nvme_io": false, 00:25:29.885 "nvme_io_md": false, 00:25:29.885 "write_zeroes": true, 00:25:29.885 "zcopy": true, 00:25:29.885 "get_zone_info": false, 00:25:29.885 "zone_management": false, 00:25:29.885 "zone_append": false, 00:25:29.885 "compare": false, 00:25:29.885 "compare_and_write": false, 00:25:29.885 "abort": true, 00:25:29.885 "seek_hole": false, 00:25:29.885 "seek_data": false, 00:25:29.885 "copy": true, 00:25:29.885 "nvme_iov_md": false 00:25:29.885 }, 00:25:29.885 "memory_domains": [ 00:25:29.885 { 00:25:29.885 "dma_device_id": "system", 00:25:29.885 "dma_device_type": 1 00:25:29.885 }, 00:25:29.885 { 00:25:29.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.885 "dma_device_type": 2 00:25:29.885 } 00:25:29.885 ], 00:25:29.885 "driver_specific": {} 00:25:29.885 } 00:25:29.885 ] 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:29.885 "name": "Existed_Raid", 00:25:29.885 "uuid": "ba0eaad1-65bd-4f50-80ac-886c10d594f0", 00:25:29.885 "strip_size_kb": 64, 00:25:29.885 "state": "configuring", 00:25:29.885 "raid_level": "concat", 00:25:29.885 "superblock": true, 
00:25:29.885 "num_base_bdevs": 3, 00:25:29.885 "num_base_bdevs_discovered": 2, 00:25:29.885 "num_base_bdevs_operational": 3, 00:25:29.885 "base_bdevs_list": [ 00:25:29.885 { 00:25:29.885 "name": "BaseBdev1", 00:25:29.885 "uuid": "cad76646-82bb-4cfd-a910-3a359c993f4f", 00:25:29.885 "is_configured": true, 00:25:29.885 "data_offset": 2048, 00:25:29.885 "data_size": 63488 00:25:29.885 }, 00:25:29.885 { 00:25:29.885 "name": null, 00:25:29.885 "uuid": "99544588-89f1-4470-a5a9-6e6ff66a0a53", 00:25:29.885 "is_configured": false, 00:25:29.885 "data_offset": 0, 00:25:29.885 "data_size": 63488 00:25:29.885 }, 00:25:29.885 { 00:25:29.885 "name": "BaseBdev3", 00:25:29.885 "uuid": "ec5ae857-be2e-44b0-b13f-6eeed067fec0", 00:25:29.885 "is_configured": true, 00:25:29.885 "data_offset": 2048, 00:25:29.885 "data_size": 63488 00:25:29.885 } 00:25:29.885 ] 00:25:29.885 }' 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:29.885 17:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.144 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.144 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.144 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:30.144 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.403 [2024-11-26 17:22:00.288706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:30.403 "name": "Existed_Raid", 00:25:30.403 "uuid": "ba0eaad1-65bd-4f50-80ac-886c10d594f0", 00:25:30.403 "strip_size_kb": 64, 00:25:30.403 "state": "configuring", 00:25:30.403 "raid_level": "concat", 00:25:30.403 "superblock": true, 00:25:30.403 "num_base_bdevs": 3, 00:25:30.403 "num_base_bdevs_discovered": 1, 00:25:30.403 "num_base_bdevs_operational": 3, 00:25:30.403 "base_bdevs_list": [ 00:25:30.403 { 00:25:30.403 "name": "BaseBdev1", 00:25:30.403 "uuid": "cad76646-82bb-4cfd-a910-3a359c993f4f", 00:25:30.403 "is_configured": true, 00:25:30.403 "data_offset": 2048, 00:25:30.403 "data_size": 63488 00:25:30.403 }, 00:25:30.403 { 00:25:30.403 "name": null, 00:25:30.403 "uuid": "99544588-89f1-4470-a5a9-6e6ff66a0a53", 00:25:30.403 "is_configured": false, 00:25:30.403 "data_offset": 0, 00:25:30.403 "data_size": 63488 00:25:30.403 }, 00:25:30.403 { 00:25:30.403 "name": null, 00:25:30.403 "uuid": "ec5ae857-be2e-44b0-b13f-6eeed067fec0", 00:25:30.403 "is_configured": false, 00:25:30.403 "data_offset": 0, 00:25:30.403 "data_size": 63488 00:25:30.403 } 00:25:30.403 ] 00:25:30.403 }' 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:30.403 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.662 [2024-11-26 17:22:00.756058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.662 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.920 17:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.920 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:30.920 "name": "Existed_Raid", 00:25:30.920 "uuid": "ba0eaad1-65bd-4f50-80ac-886c10d594f0", 00:25:30.920 "strip_size_kb": 64, 00:25:30.920 "state": "configuring", 00:25:30.920 "raid_level": "concat", 00:25:30.920 "superblock": true, 00:25:30.920 "num_base_bdevs": 3, 00:25:30.920 "num_base_bdevs_discovered": 2, 00:25:30.920 "num_base_bdevs_operational": 3, 00:25:30.920 "base_bdevs_list": [ 00:25:30.920 { 00:25:30.920 "name": "BaseBdev1", 00:25:30.920 "uuid": "cad76646-82bb-4cfd-a910-3a359c993f4f", 00:25:30.920 "is_configured": true, 00:25:30.920 "data_offset": 2048, 00:25:30.920 "data_size": 63488 00:25:30.920 }, 00:25:30.920 { 00:25:30.921 "name": null, 00:25:30.921 "uuid": "99544588-89f1-4470-a5a9-6e6ff66a0a53", 00:25:30.921 "is_configured": false, 00:25:30.921 "data_offset": 0, 00:25:30.921 "data_size": 63488 00:25:30.921 }, 00:25:30.921 { 00:25:30.921 "name": "BaseBdev3", 00:25:30.921 "uuid": "ec5ae857-be2e-44b0-b13f-6eeed067fec0", 00:25:30.921 "is_configured": true, 00:25:30.921 "data_offset": 2048, 00:25:30.921 "data_size": 63488 00:25:30.921 } 00:25:30.921 ] 00:25:30.921 }' 00:25:30.921 17:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:30.921 17:22:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:25:31.178 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:31.178 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.178 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.178 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.178 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.178 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:31.178 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:31.178 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.178 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.178 [2024-11-26 17:22:01.247391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:31.438 "name": "Existed_Raid", 00:25:31.438 "uuid": "ba0eaad1-65bd-4f50-80ac-886c10d594f0", 00:25:31.438 "strip_size_kb": 64, 00:25:31.438 "state": "configuring", 00:25:31.438 "raid_level": "concat", 00:25:31.438 "superblock": true, 00:25:31.438 "num_base_bdevs": 3, 00:25:31.438 "num_base_bdevs_discovered": 1, 00:25:31.438 "num_base_bdevs_operational": 3, 00:25:31.438 "base_bdevs_list": [ 00:25:31.438 { 00:25:31.438 "name": null, 00:25:31.438 "uuid": "cad76646-82bb-4cfd-a910-3a359c993f4f", 00:25:31.438 "is_configured": false, 00:25:31.438 "data_offset": 0, 00:25:31.438 "data_size": 63488 00:25:31.438 }, 00:25:31.438 { 00:25:31.438 "name": null, 00:25:31.438 "uuid": "99544588-89f1-4470-a5a9-6e6ff66a0a53", 00:25:31.438 "is_configured": false, 00:25:31.438 "data_offset": 0, 
00:25:31.438 "data_size": 63488 00:25:31.438 }, 00:25:31.438 { 00:25:31.438 "name": "BaseBdev3", 00:25:31.438 "uuid": "ec5ae857-be2e-44b0-b13f-6eeed067fec0", 00:25:31.438 "is_configured": true, 00:25:31.438 "data_offset": 2048, 00:25:31.438 "data_size": 63488 00:25:31.438 } 00:25:31.438 ] 00:25:31.438 }' 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:31.438 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.003 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.004 [2024-11-26 17:22:01.889686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:32.004 17:22:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:32.004 "name": "Existed_Raid", 00:25:32.004 "uuid": "ba0eaad1-65bd-4f50-80ac-886c10d594f0", 00:25:32.004 "strip_size_kb": 64, 00:25:32.004 "state": "configuring", 00:25:32.004 "raid_level": "concat", 00:25:32.004 "superblock": true, 00:25:32.004 "num_base_bdevs": 3, 00:25:32.004 
"num_base_bdevs_discovered": 2, 00:25:32.004 "num_base_bdevs_operational": 3, 00:25:32.004 "base_bdevs_list": [ 00:25:32.004 { 00:25:32.004 "name": null, 00:25:32.004 "uuid": "cad76646-82bb-4cfd-a910-3a359c993f4f", 00:25:32.004 "is_configured": false, 00:25:32.004 "data_offset": 0, 00:25:32.004 "data_size": 63488 00:25:32.004 }, 00:25:32.004 { 00:25:32.004 "name": "BaseBdev2", 00:25:32.004 "uuid": "99544588-89f1-4470-a5a9-6e6ff66a0a53", 00:25:32.004 "is_configured": true, 00:25:32.004 "data_offset": 2048, 00:25:32.004 "data_size": 63488 00:25:32.004 }, 00:25:32.004 { 00:25:32.004 "name": "BaseBdev3", 00:25:32.004 "uuid": "ec5ae857-be2e-44b0-b13f-6eeed067fec0", 00:25:32.004 "is_configured": true, 00:25:32.004 "data_offset": 2048, 00:25:32.004 "data_size": 63488 00:25:32.004 } 00:25:32.004 ] 00:25:32.004 }' 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:32.004 17:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.263 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.263 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.263 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.263 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:32.263 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:32.522 17:22:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cad76646-82bb-4cfd-a910-3a359c993f4f 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.522 [2024-11-26 17:22:02.470705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:32.522 [2024-11-26 17:22:02.470975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:32.522 [2024-11-26 17:22:02.470997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:32.522 [2024-11-26 17:22:02.471278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:32.522 [2024-11-26 17:22:02.471429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:32.522 [2024-11-26 17:22:02.471440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:32.522 NewBaseBdev 00:25:32.522 [2024-11-26 17:22:02.471606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:25:32.522 
17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.522 [ 00:25:32.522 { 00:25:32.522 "name": "NewBaseBdev", 00:25:32.522 "aliases": [ 00:25:32.522 "cad76646-82bb-4cfd-a910-3a359c993f4f" 00:25:32.522 ], 00:25:32.522 "product_name": "Malloc disk", 00:25:32.522 "block_size": 512, 00:25:32.522 "num_blocks": 65536, 00:25:32.522 "uuid": "cad76646-82bb-4cfd-a910-3a359c993f4f", 00:25:32.522 "assigned_rate_limits": { 00:25:32.522 "rw_ios_per_sec": 0, 00:25:32.522 "rw_mbytes_per_sec": 0, 00:25:32.522 "r_mbytes_per_sec": 0, 00:25:32.522 "w_mbytes_per_sec": 0 00:25:32.522 }, 00:25:32.522 "claimed": true, 00:25:32.522 "claim_type": "exclusive_write", 00:25:32.522 "zoned": false, 00:25:32.522 "supported_io_types": { 00:25:32.522 "read": true, 00:25:32.522 "write": true, 00:25:32.522 
"unmap": true, 00:25:32.522 "flush": true, 00:25:32.522 "reset": true, 00:25:32.522 "nvme_admin": false, 00:25:32.522 "nvme_io": false, 00:25:32.522 "nvme_io_md": false, 00:25:32.522 "write_zeroes": true, 00:25:32.522 "zcopy": true, 00:25:32.522 "get_zone_info": false, 00:25:32.522 "zone_management": false, 00:25:32.522 "zone_append": false, 00:25:32.522 "compare": false, 00:25:32.522 "compare_and_write": false, 00:25:32.522 "abort": true, 00:25:32.522 "seek_hole": false, 00:25:32.522 "seek_data": false, 00:25:32.522 "copy": true, 00:25:32.522 "nvme_iov_md": false 00:25:32.522 }, 00:25:32.522 "memory_domains": [ 00:25:32.522 { 00:25:32.522 "dma_device_id": "system", 00:25:32.522 "dma_device_type": 1 00:25:32.522 }, 00:25:32.522 { 00:25:32.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.522 "dma_device_type": 2 00:25:32.522 } 00:25:32.522 ], 00:25:32.522 "driver_specific": {} 00:25:32.522 } 00:25:32.522 ] 00:25:32.522 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:32.523 "name": "Existed_Raid", 00:25:32.523 "uuid": "ba0eaad1-65bd-4f50-80ac-886c10d594f0", 00:25:32.523 "strip_size_kb": 64, 00:25:32.523 "state": "online", 00:25:32.523 "raid_level": "concat", 00:25:32.523 "superblock": true, 00:25:32.523 "num_base_bdevs": 3, 00:25:32.523 "num_base_bdevs_discovered": 3, 00:25:32.523 "num_base_bdevs_operational": 3, 00:25:32.523 "base_bdevs_list": [ 00:25:32.523 { 00:25:32.523 "name": "NewBaseBdev", 00:25:32.523 "uuid": "cad76646-82bb-4cfd-a910-3a359c993f4f", 00:25:32.523 "is_configured": true, 00:25:32.523 "data_offset": 2048, 00:25:32.523 "data_size": 63488 00:25:32.523 }, 00:25:32.523 { 00:25:32.523 "name": "BaseBdev2", 00:25:32.523 "uuid": "99544588-89f1-4470-a5a9-6e6ff66a0a53", 00:25:32.523 "is_configured": true, 00:25:32.523 "data_offset": 2048, 00:25:32.523 "data_size": 63488 00:25:32.523 }, 00:25:32.523 { 00:25:32.523 "name": "BaseBdev3", 00:25:32.523 "uuid": "ec5ae857-be2e-44b0-b13f-6eeed067fec0", 
00:25:32.523 "is_configured": true, 00:25:32.523 "data_offset": 2048, 00:25:32.523 "data_size": 63488 00:25:32.523 } 00:25:32.523 ] 00:25:32.523 }' 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:32.523 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.091 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:33.091 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:33.091 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:33.091 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:33.091 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:33.091 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:33.091 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:33.091 17:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:33.091 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.091 17:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.091 [2024-11-26 17:22:02.974428] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:33.091 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.091 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:33.091 "name": "Existed_Raid", 00:25:33.091 "aliases": [ 00:25:33.091 "ba0eaad1-65bd-4f50-80ac-886c10d594f0" 00:25:33.091 ], 00:25:33.091 
"product_name": "Raid Volume", 00:25:33.091 "block_size": 512, 00:25:33.091 "num_blocks": 190464, 00:25:33.091 "uuid": "ba0eaad1-65bd-4f50-80ac-886c10d594f0", 00:25:33.091 "assigned_rate_limits": { 00:25:33.091 "rw_ios_per_sec": 0, 00:25:33.091 "rw_mbytes_per_sec": 0, 00:25:33.091 "r_mbytes_per_sec": 0, 00:25:33.091 "w_mbytes_per_sec": 0 00:25:33.091 }, 00:25:33.091 "claimed": false, 00:25:33.091 "zoned": false, 00:25:33.091 "supported_io_types": { 00:25:33.091 "read": true, 00:25:33.091 "write": true, 00:25:33.091 "unmap": true, 00:25:33.091 "flush": true, 00:25:33.091 "reset": true, 00:25:33.092 "nvme_admin": false, 00:25:33.092 "nvme_io": false, 00:25:33.092 "nvme_io_md": false, 00:25:33.092 "write_zeroes": true, 00:25:33.092 "zcopy": false, 00:25:33.092 "get_zone_info": false, 00:25:33.092 "zone_management": false, 00:25:33.092 "zone_append": false, 00:25:33.092 "compare": false, 00:25:33.092 "compare_and_write": false, 00:25:33.092 "abort": false, 00:25:33.092 "seek_hole": false, 00:25:33.092 "seek_data": false, 00:25:33.092 "copy": false, 00:25:33.092 "nvme_iov_md": false 00:25:33.092 }, 00:25:33.092 "memory_domains": [ 00:25:33.092 { 00:25:33.092 "dma_device_id": "system", 00:25:33.092 "dma_device_type": 1 00:25:33.092 }, 00:25:33.092 { 00:25:33.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.092 "dma_device_type": 2 00:25:33.092 }, 00:25:33.092 { 00:25:33.092 "dma_device_id": "system", 00:25:33.092 "dma_device_type": 1 00:25:33.092 }, 00:25:33.092 { 00:25:33.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.092 "dma_device_type": 2 00:25:33.092 }, 00:25:33.092 { 00:25:33.092 "dma_device_id": "system", 00:25:33.092 "dma_device_type": 1 00:25:33.092 }, 00:25:33.092 { 00:25:33.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.092 "dma_device_type": 2 00:25:33.092 } 00:25:33.092 ], 00:25:33.092 "driver_specific": { 00:25:33.092 "raid": { 00:25:33.092 "uuid": "ba0eaad1-65bd-4f50-80ac-886c10d594f0", 00:25:33.092 "strip_size_kb": 64, 00:25:33.092 
"state": "online", 00:25:33.092 "raid_level": "concat", 00:25:33.092 "superblock": true, 00:25:33.092 "num_base_bdevs": 3, 00:25:33.092 "num_base_bdevs_discovered": 3, 00:25:33.092 "num_base_bdevs_operational": 3, 00:25:33.092 "base_bdevs_list": [ 00:25:33.092 { 00:25:33.092 "name": "NewBaseBdev", 00:25:33.092 "uuid": "cad76646-82bb-4cfd-a910-3a359c993f4f", 00:25:33.092 "is_configured": true, 00:25:33.092 "data_offset": 2048, 00:25:33.092 "data_size": 63488 00:25:33.092 }, 00:25:33.092 { 00:25:33.092 "name": "BaseBdev2", 00:25:33.092 "uuid": "99544588-89f1-4470-a5a9-6e6ff66a0a53", 00:25:33.092 "is_configured": true, 00:25:33.092 "data_offset": 2048, 00:25:33.092 "data_size": 63488 00:25:33.092 }, 00:25:33.092 { 00:25:33.092 "name": "BaseBdev3", 00:25:33.092 "uuid": "ec5ae857-be2e-44b0-b13f-6eeed067fec0", 00:25:33.092 "is_configured": true, 00:25:33.092 "data_offset": 2048, 00:25:33.092 "data_size": 63488 00:25:33.092 } 00:25:33.092 ] 00:25:33.092 } 00:25:33.092 } 00:25:33.092 }' 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:33.092 BaseBdev2 00:25:33.092 BaseBdev3' 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:33.092 17:22:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.092 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.351 [2024-11-26 17:22:03.241735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:33.351 [2024-11-26 17:22:03.241886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:33.351 [2024-11-26 17:22:03.242132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:33.351 [2024-11-26 17:22:03.242298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:33.351 [2024-11-26 17:22:03.242386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66325 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66325 ']' 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
66325 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66325 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:33.351 killing process with pid 66325 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66325' 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66325 00:25:33.351 [2024-11-26 17:22:03.294219] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:33.351 17:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66325 00:25:33.610 [2024-11-26 17:22:03.616302] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:35.005 17:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:25:35.005 00:25:35.005 real 0m10.722s 00:25:35.005 user 0m16.815s 00:25:35.005 sys 0m2.256s 00:25:35.005 17:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:35.005 ************************************ 00:25:35.005 END TEST raid_state_function_test_sb 00:25:35.005 ************************************ 00:25:35.005 17:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:35.005 17:22:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:25:35.005 17:22:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:35.005 
17:22:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:35.005 17:22:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:35.005 ************************************ 00:25:35.005 START TEST raid_superblock_test 00:25:35.005 ************************************ 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66951 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66951 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66951 ']' 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:35.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:35.005 17:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.005 [2024-11-26 17:22:04.991614] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:25:35.005 [2024-11-26 17:22:04.992613] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66951 ] 00:25:35.264 [2024-11-26 17:22:05.175488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.264 [2024-11-26 17:22:05.320603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.523 [2024-11-26 17:22:05.543665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:35.523 [2024-11-26 17:22:05.543743] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:35.782 17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:35.782 17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:25:35.782 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:35.782 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:35.782 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:35.782 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:35.782 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:35.782 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:35.782 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:35.782 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:35.782 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:25:35.782 
17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.782 17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.042 malloc1 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.042 [2024-11-26 17:22:05.922421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:36.042 [2024-11-26 17:22:05.922673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.042 [2024-11-26 17:22:05.922808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:36.042 [2024-11-26 17:22:05.922900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.042 [2024-11-26 17:22:05.925951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.042 [2024-11-26 17:22:05.926121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:36.042 pt1 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.042 malloc2 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.042 [2024-11-26 17:22:05.986857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:36.042 [2024-11-26 17:22:05.987073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.042 [2024-11-26 17:22:05.987190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:36.042 [2024-11-26 17:22:05.987268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.042 [2024-11-26 17:22:05.990191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.042 [2024-11-26 17:22:05.990346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:36.042 
pt2 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.042 17:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.042 malloc3 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.042 [2024-11-26 17:22:06.057758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:36.042 [2024-11-26 17:22:06.057841] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.042 [2024-11-26 17:22:06.057871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:36.042 [2024-11-26 17:22:06.057885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.042 [2024-11-26 17:22:06.060994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.042 [2024-11-26 17:22:06.061154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:36.042 pt3 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.042 [2024-11-26 17:22:06.065958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:36.042 [2024-11-26 17:22:06.068409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:36.042 [2024-11-26 17:22:06.068486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:36.042 [2024-11-26 17:22:06.068687] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:36.042 [2024-11-26 17:22:06.068710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:36.042 [2024-11-26 17:22:06.069012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:25:36.042 [2024-11-26 17:22:06.069205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:36.042 [2024-11-26 17:22:06.069217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:36.042 [2024-11-26 17:22:06.069424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.042 17:22:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.042 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:36.042 "name": "raid_bdev1", 00:25:36.042 "uuid": "1a9881a6-89a5-471c-9bce-306b1a94e7e9", 00:25:36.042 "strip_size_kb": 64, 00:25:36.042 "state": "online", 00:25:36.042 "raid_level": "concat", 00:25:36.042 "superblock": true, 00:25:36.042 "num_base_bdevs": 3, 00:25:36.042 "num_base_bdevs_discovered": 3, 00:25:36.042 "num_base_bdevs_operational": 3, 00:25:36.043 "base_bdevs_list": [ 00:25:36.043 { 00:25:36.043 "name": "pt1", 00:25:36.043 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:36.043 "is_configured": true, 00:25:36.043 "data_offset": 2048, 00:25:36.043 "data_size": 63488 00:25:36.043 }, 00:25:36.043 { 00:25:36.043 "name": "pt2", 00:25:36.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:36.043 "is_configured": true, 00:25:36.043 "data_offset": 2048, 00:25:36.043 "data_size": 63488 00:25:36.043 }, 00:25:36.043 { 00:25:36.043 "name": "pt3", 00:25:36.043 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:36.043 "is_configured": true, 00:25:36.043 "data_offset": 2048, 00:25:36.043 "data_size": 63488 00:25:36.043 } 00:25:36.043 ] 00:25:36.043 }' 00:25:36.043 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:36.043 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.611 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:36.611 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:36.611 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:36.611 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:25:36.611 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:36.611 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:36.611 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:36.611 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.611 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.611 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:36.611 [2024-11-26 17:22:06.493973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:36.611 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.611 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:36.611 "name": "raid_bdev1", 00:25:36.611 "aliases": [ 00:25:36.611 "1a9881a6-89a5-471c-9bce-306b1a94e7e9" 00:25:36.611 ], 00:25:36.611 "product_name": "Raid Volume", 00:25:36.611 "block_size": 512, 00:25:36.611 "num_blocks": 190464, 00:25:36.611 "uuid": "1a9881a6-89a5-471c-9bce-306b1a94e7e9", 00:25:36.611 "assigned_rate_limits": { 00:25:36.611 "rw_ios_per_sec": 0, 00:25:36.611 "rw_mbytes_per_sec": 0, 00:25:36.611 "r_mbytes_per_sec": 0, 00:25:36.611 "w_mbytes_per_sec": 0 00:25:36.611 }, 00:25:36.611 "claimed": false, 00:25:36.611 "zoned": false, 00:25:36.611 "supported_io_types": { 00:25:36.611 "read": true, 00:25:36.611 "write": true, 00:25:36.611 "unmap": true, 00:25:36.611 "flush": true, 00:25:36.611 "reset": true, 00:25:36.611 "nvme_admin": false, 00:25:36.611 "nvme_io": false, 00:25:36.611 "nvme_io_md": false, 00:25:36.611 "write_zeroes": true, 00:25:36.611 "zcopy": false, 00:25:36.611 "get_zone_info": false, 00:25:36.611 "zone_management": false, 00:25:36.611 "zone_append": false, 00:25:36.611 "compare": 
false, 00:25:36.611 "compare_and_write": false, 00:25:36.611 "abort": false, 00:25:36.611 "seek_hole": false, 00:25:36.611 "seek_data": false, 00:25:36.611 "copy": false, 00:25:36.611 "nvme_iov_md": false 00:25:36.611 }, 00:25:36.611 "memory_domains": [ 00:25:36.611 { 00:25:36.611 "dma_device_id": "system", 00:25:36.611 "dma_device_type": 1 00:25:36.611 }, 00:25:36.611 { 00:25:36.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.611 "dma_device_type": 2 00:25:36.611 }, 00:25:36.611 { 00:25:36.611 "dma_device_id": "system", 00:25:36.611 "dma_device_type": 1 00:25:36.611 }, 00:25:36.611 { 00:25:36.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.611 "dma_device_type": 2 00:25:36.611 }, 00:25:36.611 { 00:25:36.611 "dma_device_id": "system", 00:25:36.611 "dma_device_type": 1 00:25:36.611 }, 00:25:36.611 { 00:25:36.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.611 "dma_device_type": 2 00:25:36.611 } 00:25:36.611 ], 00:25:36.611 "driver_specific": { 00:25:36.611 "raid": { 00:25:36.611 "uuid": "1a9881a6-89a5-471c-9bce-306b1a94e7e9", 00:25:36.611 "strip_size_kb": 64, 00:25:36.611 "state": "online", 00:25:36.611 "raid_level": "concat", 00:25:36.611 "superblock": true, 00:25:36.611 "num_base_bdevs": 3, 00:25:36.611 "num_base_bdevs_discovered": 3, 00:25:36.611 "num_base_bdevs_operational": 3, 00:25:36.611 "base_bdevs_list": [ 00:25:36.611 { 00:25:36.611 "name": "pt1", 00:25:36.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:36.611 "is_configured": true, 00:25:36.611 "data_offset": 2048, 00:25:36.611 "data_size": 63488 00:25:36.611 }, 00:25:36.611 { 00:25:36.611 "name": "pt2", 00:25:36.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:36.611 "is_configured": true, 00:25:36.611 "data_offset": 2048, 00:25:36.611 "data_size": 63488 00:25:36.611 }, 00:25:36.611 { 00:25:36.611 "name": "pt3", 00:25:36.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:36.611 "is_configured": true, 00:25:36.611 "data_offset": 2048, 00:25:36.611 
"data_size": 63488 00:25:36.611 } 00:25:36.611 ] 00:25:36.611 } 00:25:36.612 } 00:25:36.612 }' 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:36.612 pt2 00:25:36.612 pt3' 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:36.612 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:36.872 [2024-11-26 17:22:06.733952] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1a9881a6-89a5-471c-9bce-306b1a94e7e9 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1a9881a6-89a5-471c-9bce-306b1a94e7e9 ']' 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.872 [2024-11-26 17:22:06.781656] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:36.872 [2024-11-26 17:22:06.781829] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:36.872 [2024-11-26 17:22:06.781983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:36.872 [2024-11-26 17:22:06.782081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:36.872 [2024-11-26 17:22:06.782100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.872 17:22:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.872 [2024-11-26 17:22:06.925750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:36.872 [2024-11-26 17:22:06.928370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:25:36.872 [2024-11-26 17:22:06.928431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:36.872 [2024-11-26 17:22:06.928494] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:36.872 [2024-11-26 17:22:06.928579] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:36.872 [2024-11-26 17:22:06.928605] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:25:36.872 [2024-11-26 17:22:06.928629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:36.872 [2024-11-26 17:22:06.928641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:36.872 request: 00:25:36.872 { 00:25:36.872 "name": "raid_bdev1", 00:25:36.872 "raid_level": "concat", 00:25:36.872 "base_bdevs": [ 00:25:36.872 "malloc1", 00:25:36.872 "malloc2", 00:25:36.872 "malloc3" 00:25:36.872 ], 00:25:36.872 "strip_size_kb": 64, 00:25:36.872 "superblock": false, 00:25:36.872 "method": "bdev_raid_create", 00:25:36.872 "req_id": 1 00:25:36.872 } 00:25:36.872 Got JSON-RPC error response 00:25:36.872 response: 00:25:36.872 { 00:25:36.872 "code": -17, 00:25:36.872 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:36.872 } 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:36.872 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.131 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:37.131 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:37.131 17:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:37.131 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.131 17:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.131 [2024-11-26 17:22:07.001656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:37.131 [2024-11-26 17:22:07.001739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.131 [2024-11-26 17:22:07.001769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:37.131 [2024-11-26 17:22:07.001782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.131 [2024-11-26 17:22:07.004743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.131 [2024-11-26 17:22:07.004784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:37.131 [2024-11-26 17:22:07.004891] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:37.131 [2024-11-26 17:22:07.004957] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:37.131 pt1 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.131 "name": "raid_bdev1", 
00:25:37.131 "uuid": "1a9881a6-89a5-471c-9bce-306b1a94e7e9", 00:25:37.131 "strip_size_kb": 64, 00:25:37.131 "state": "configuring", 00:25:37.131 "raid_level": "concat", 00:25:37.131 "superblock": true, 00:25:37.131 "num_base_bdevs": 3, 00:25:37.131 "num_base_bdevs_discovered": 1, 00:25:37.131 "num_base_bdevs_operational": 3, 00:25:37.131 "base_bdevs_list": [ 00:25:37.131 { 00:25:37.131 "name": "pt1", 00:25:37.131 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:37.131 "is_configured": true, 00:25:37.131 "data_offset": 2048, 00:25:37.131 "data_size": 63488 00:25:37.131 }, 00:25:37.131 { 00:25:37.131 "name": null, 00:25:37.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:37.131 "is_configured": false, 00:25:37.131 "data_offset": 2048, 00:25:37.131 "data_size": 63488 00:25:37.131 }, 00:25:37.131 { 00:25:37.131 "name": null, 00:25:37.131 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:37.131 "is_configured": false, 00:25:37.131 "data_offset": 2048, 00:25:37.131 "data_size": 63488 00:25:37.131 } 00:25:37.131 ] 00:25:37.131 }' 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.131 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.390 [2024-11-26 17:22:07.469668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:37.390 [2024-11-26 17:22:07.469924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.390 [2024-11-26 17:22:07.470110] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:37.390 [2024-11-26 17:22:07.470135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.390 [2024-11-26 17:22:07.470746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.390 [2024-11-26 17:22:07.470777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:37.390 [2024-11-26 17:22:07.470893] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:37.390 [2024-11-26 17:22:07.470930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:37.390 pt2 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.390 [2024-11-26 17:22:07.477671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.390 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.649 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.649 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.649 "name": "raid_bdev1", 00:25:37.649 "uuid": "1a9881a6-89a5-471c-9bce-306b1a94e7e9", 00:25:37.649 "strip_size_kb": 64, 00:25:37.649 "state": "configuring", 00:25:37.649 "raid_level": "concat", 00:25:37.649 "superblock": true, 00:25:37.649 "num_base_bdevs": 3, 00:25:37.649 "num_base_bdevs_discovered": 1, 00:25:37.649 "num_base_bdevs_operational": 3, 00:25:37.649 "base_bdevs_list": [ 00:25:37.649 { 00:25:37.649 "name": "pt1", 00:25:37.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:37.649 "is_configured": true, 00:25:37.649 "data_offset": 2048, 00:25:37.649 "data_size": 63488 00:25:37.649 }, 00:25:37.649 { 00:25:37.649 "name": null, 00:25:37.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:37.649 "is_configured": false, 00:25:37.649 "data_offset": 0, 00:25:37.649 "data_size": 63488 00:25:37.649 }, 00:25:37.649 { 00:25:37.649 "name": null, 00:25:37.649 
"uuid": "00000000-0000-0000-0000-000000000003", 00:25:37.649 "is_configured": false, 00:25:37.649 "data_offset": 2048, 00:25:37.649 "data_size": 63488 00:25:37.649 } 00:25:37.649 ] 00:25:37.649 }' 00:25:37.649 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.649 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.909 [2024-11-26 17:22:07.837632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:37.909 [2024-11-26 17:22:07.837868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.909 [2024-11-26 17:22:07.837903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:37.909 [2024-11-26 17:22:07.837919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.909 [2024-11-26 17:22:07.838487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.909 [2024-11-26 17:22:07.838532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:37.909 [2024-11-26 17:22:07.838638] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:37.909 [2024-11-26 17:22:07.838669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:37.909 pt2 00:25:37.909 17:22:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.909 [2024-11-26 17:22:07.849627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:37.909 [2024-11-26 17:22:07.849693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.909 [2024-11-26 17:22:07.849716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:37.909 [2024-11-26 17:22:07.849731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.909 [2024-11-26 17:22:07.850224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.909 [2024-11-26 17:22:07.850251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:37.909 [2024-11-26 17:22:07.850335] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:37.909 [2024-11-26 17:22:07.850361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:37.909 [2024-11-26 17:22:07.850486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:37.909 [2024-11-26 17:22:07.850501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:37.909 [2024-11-26 17:22:07.850832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:25:37.909 [2024-11-26 17:22:07.850998] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:37.909 [2024-11-26 17:22:07.851008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:37.909 [2024-11-26 17:22:07.851149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:37.909 pt3 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.909 17:22:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.909 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.909 "name": "raid_bdev1", 00:25:37.909 "uuid": "1a9881a6-89a5-471c-9bce-306b1a94e7e9", 00:25:37.909 "strip_size_kb": 64, 00:25:37.909 "state": "online", 00:25:37.909 "raid_level": "concat", 00:25:37.909 "superblock": true, 00:25:37.909 "num_base_bdevs": 3, 00:25:37.910 "num_base_bdevs_discovered": 3, 00:25:37.910 "num_base_bdevs_operational": 3, 00:25:37.910 "base_bdevs_list": [ 00:25:37.910 { 00:25:37.910 "name": "pt1", 00:25:37.910 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:37.910 "is_configured": true, 00:25:37.910 "data_offset": 2048, 00:25:37.910 "data_size": 63488 00:25:37.910 }, 00:25:37.910 { 00:25:37.910 "name": "pt2", 00:25:37.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:37.910 "is_configured": true, 00:25:37.910 "data_offset": 2048, 00:25:37.910 "data_size": 63488 00:25:37.910 }, 00:25:37.910 { 00:25:37.910 "name": "pt3", 00:25:37.910 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:37.910 "is_configured": true, 00:25:37.910 "data_offset": 2048, 00:25:37.910 "data_size": 63488 00:25:37.910 } 00:25:37.910 ] 00:25:37.910 }' 00:25:37.910 17:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.910 17:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.168 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:38.168 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:25:38.168 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:38.168 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:38.169 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:38.169 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:38.169 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:38.169 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:38.169 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.169 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.428 [2024-11-26 17:22:08.286016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:38.428 "name": "raid_bdev1", 00:25:38.428 "aliases": [ 00:25:38.428 "1a9881a6-89a5-471c-9bce-306b1a94e7e9" 00:25:38.428 ], 00:25:38.428 "product_name": "Raid Volume", 00:25:38.428 "block_size": 512, 00:25:38.428 "num_blocks": 190464, 00:25:38.428 "uuid": "1a9881a6-89a5-471c-9bce-306b1a94e7e9", 00:25:38.428 "assigned_rate_limits": { 00:25:38.428 "rw_ios_per_sec": 0, 00:25:38.428 "rw_mbytes_per_sec": 0, 00:25:38.428 "r_mbytes_per_sec": 0, 00:25:38.428 "w_mbytes_per_sec": 0 00:25:38.428 }, 00:25:38.428 "claimed": false, 00:25:38.428 "zoned": false, 00:25:38.428 "supported_io_types": { 00:25:38.428 "read": true, 00:25:38.428 "write": true, 00:25:38.428 "unmap": true, 00:25:38.428 "flush": true, 00:25:38.428 "reset": true, 00:25:38.428 "nvme_admin": false, 00:25:38.428 "nvme_io": false, 
00:25:38.428 "nvme_io_md": false, 00:25:38.428 "write_zeroes": true, 00:25:38.428 "zcopy": false, 00:25:38.428 "get_zone_info": false, 00:25:38.428 "zone_management": false, 00:25:38.428 "zone_append": false, 00:25:38.428 "compare": false, 00:25:38.428 "compare_and_write": false, 00:25:38.428 "abort": false, 00:25:38.428 "seek_hole": false, 00:25:38.428 "seek_data": false, 00:25:38.428 "copy": false, 00:25:38.428 "nvme_iov_md": false 00:25:38.428 }, 00:25:38.428 "memory_domains": [ 00:25:38.428 { 00:25:38.428 "dma_device_id": "system", 00:25:38.428 "dma_device_type": 1 00:25:38.428 }, 00:25:38.428 { 00:25:38.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.428 "dma_device_type": 2 00:25:38.428 }, 00:25:38.428 { 00:25:38.428 "dma_device_id": "system", 00:25:38.428 "dma_device_type": 1 00:25:38.428 }, 00:25:38.428 { 00:25:38.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.428 "dma_device_type": 2 00:25:38.428 }, 00:25:38.428 { 00:25:38.428 "dma_device_id": "system", 00:25:38.428 "dma_device_type": 1 00:25:38.428 }, 00:25:38.428 { 00:25:38.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.428 "dma_device_type": 2 00:25:38.428 } 00:25:38.428 ], 00:25:38.428 "driver_specific": { 00:25:38.428 "raid": { 00:25:38.428 "uuid": "1a9881a6-89a5-471c-9bce-306b1a94e7e9", 00:25:38.428 "strip_size_kb": 64, 00:25:38.428 "state": "online", 00:25:38.428 "raid_level": "concat", 00:25:38.428 "superblock": true, 00:25:38.428 "num_base_bdevs": 3, 00:25:38.428 "num_base_bdevs_discovered": 3, 00:25:38.428 "num_base_bdevs_operational": 3, 00:25:38.428 "base_bdevs_list": [ 00:25:38.428 { 00:25:38.428 "name": "pt1", 00:25:38.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:38.428 "is_configured": true, 00:25:38.428 "data_offset": 2048, 00:25:38.428 "data_size": 63488 00:25:38.428 }, 00:25:38.428 { 00:25:38.428 "name": "pt2", 00:25:38.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:38.428 "is_configured": true, 00:25:38.428 "data_offset": 2048, 00:25:38.428 
"data_size": 63488 00:25:38.428 }, 00:25:38.428 { 00:25:38.428 "name": "pt3", 00:25:38.428 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:38.428 "is_configured": true, 00:25:38.428 "data_offset": 2048, 00:25:38.428 "data_size": 63488 00:25:38.428 } 00:25:38.428 ] 00:25:38.428 } 00:25:38.428 } 00:25:38.428 }' 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:38.428 pt2 00:25:38.428 pt3' 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.428 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:38.688 [2024-11-26 17:22:08.557986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1a9881a6-89a5-471c-9bce-306b1a94e7e9 '!=' 1a9881a6-89a5-471c-9bce-306b1a94e7e9 ']' 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66951 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66951 ']' 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66951 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66951 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:38.688 killing process with pid 66951 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66951' 00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66951 00:25:38.688 [2024-11-26 17:22:08.636453] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:25:38.688 17:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66951 00:25:38.688 [2024-11-26 17:22:08.636592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:38.688 [2024-11-26 17:22:08.636668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:38.688 [2024-11-26 17:22:08.636685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:38.947 [2024-11-26 17:22:08.964470] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:40.328 ************************************ 00:25:40.328 END TEST raid_superblock_test 00:25:40.328 ************************************ 00:25:40.328 17:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:25:40.328 00:25:40.328 real 0m5.308s 00:25:40.328 user 0m7.466s 00:25:40.328 sys 0m1.046s 00:25:40.328 17:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:40.328 17:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.328 17:22:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:25:40.328 17:22:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:40.328 17:22:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:40.328 17:22:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:40.328 ************************************ 00:25:40.328 START TEST raid_read_error_test 00:25:40.328 ************************************ 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:40.328 17:22:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cxCEvaJvlQ 00:25:40.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67204 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67204 00:25:40.328 17:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67204 ']' 00:25:40.329 17:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.329 17:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:40.329 17:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:40.329 17:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.329 17:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:40.329 17:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.329 [2024-11-26 17:22:10.404284] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:25:40.329 [2024-11-26 17:22:10.404650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67204 ] 00:25:40.587 [2024-11-26 17:22:10.589286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.846 [2024-11-26 17:22:10.735227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.846 [2024-11-26 17:22:10.945939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:40.846 [2024-11-26 17:22:10.946211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.415 BaseBdev1_malloc 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.415 true 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.415 [2024-11-26 17:22:11.313008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:41.415 [2024-11-26 17:22:11.313076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.415 [2024-11-26 17:22:11.313102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:41.415 [2024-11-26 17:22:11.313116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.415 [2024-11-26 17:22:11.315720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.415 [2024-11-26 17:22:11.315765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:41.415 BaseBdev1 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.415 BaseBdev2_malloc 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.415 true 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.415 [2024-11-26 17:22:11.388914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:41.415 [2024-11-26 17:22:11.389119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.415 [2024-11-26 17:22:11.389152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:41.415 [2024-11-26 17:22:11.389168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.415 [2024-11-26 17:22:11.391950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.415 [2024-11-26 17:22:11.391996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:41.415 BaseBdev2 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.415 BaseBdev3_malloc 00:25:41.415 17:22:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.415 true 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.415 [2024-11-26 17:22:11.474018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:41.415 [2024-11-26 17:22:11.474084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.415 [2024-11-26 17:22:11.474107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:41.415 [2024-11-26 17:22:11.474123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.415 [2024-11-26 17:22:11.476690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.415 [2024-11-26 17:22:11.476735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:41.415 BaseBdev3 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.415 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.416 [2024-11-26 17:22:11.486103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:41.416 [2024-11-26 17:22:11.488320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:41.416 [2024-11-26 17:22:11.488547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:41.416 [2024-11-26 17:22:11.488773] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:41.416 [2024-11-26 17:22:11.488788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:41.416 [2024-11-26 17:22:11.489075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:25:41.416 [2024-11-26 17:22:11.489237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:41.416 [2024-11-26 17:22:11.489253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:41.416 [2024-11-26 17:22:11.489408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:41.416 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.416 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:25:41.416 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:41.416 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:41.416 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:41.416 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:41.416 17:22:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:41.416 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:41.416 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:41.416 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:41.416 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:41.416 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.416 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.416 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.416 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.416 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.675 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:41.675 "name": "raid_bdev1", 00:25:41.675 "uuid": "c25e77c7-30c1-46ca-b63f-b82ca8a32c84", 00:25:41.675 "strip_size_kb": 64, 00:25:41.675 "state": "online", 00:25:41.675 "raid_level": "concat", 00:25:41.675 "superblock": true, 00:25:41.675 "num_base_bdevs": 3, 00:25:41.675 "num_base_bdevs_discovered": 3, 00:25:41.675 "num_base_bdevs_operational": 3, 00:25:41.675 "base_bdevs_list": [ 00:25:41.675 { 00:25:41.675 "name": "BaseBdev1", 00:25:41.675 "uuid": "784629cc-8d1a-5f40-b691-18ec7234ccd4", 00:25:41.675 "is_configured": true, 00:25:41.675 "data_offset": 2048, 00:25:41.675 "data_size": 63488 00:25:41.675 }, 00:25:41.675 { 00:25:41.675 "name": "BaseBdev2", 00:25:41.675 "uuid": "847e054e-3d70-5ce5-b13d-2effc44e6e2c", 00:25:41.675 "is_configured": true, 00:25:41.675 "data_offset": 2048, 00:25:41.675 "data_size": 63488 
00:25:41.675 }, 00:25:41.675 { 00:25:41.675 "name": "BaseBdev3", 00:25:41.675 "uuid": "9b042825-9244-50f7-8d43-097d6ce9790a", 00:25:41.675 "is_configured": true, 00:25:41.675 "data_offset": 2048, 00:25:41.675 "data_size": 63488 00:25:41.675 } 00:25:41.675 ] 00:25:41.675 }' 00:25:41.675 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:41.675 17:22:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.935 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:41.935 17:22:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:41.935 [2024-11-26 17:22:11.999247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:42.872 "name": "raid_bdev1", 00:25:42.872 "uuid": "c25e77c7-30c1-46ca-b63f-b82ca8a32c84", 00:25:42.872 "strip_size_kb": 64, 00:25:42.872 "state": "online", 00:25:42.872 "raid_level": "concat", 00:25:42.872 "superblock": true, 00:25:42.872 "num_base_bdevs": 3, 00:25:42.872 "num_base_bdevs_discovered": 3, 00:25:42.872 "num_base_bdevs_operational": 3, 00:25:42.872 "base_bdevs_list": [ 00:25:42.872 { 00:25:42.872 "name": "BaseBdev1", 00:25:42.872 "uuid": "784629cc-8d1a-5f40-b691-18ec7234ccd4", 00:25:42.872 "is_configured": true, 00:25:42.872 "data_offset": 2048, 00:25:42.872 "data_size": 63488 
00:25:42.872 }, 00:25:42.872 { 00:25:42.872 "name": "BaseBdev2", 00:25:42.872 "uuid": "847e054e-3d70-5ce5-b13d-2effc44e6e2c", 00:25:42.872 "is_configured": true, 00:25:42.872 "data_offset": 2048, 00:25:42.872 "data_size": 63488 00:25:42.872 }, 00:25:42.872 { 00:25:42.872 "name": "BaseBdev3", 00:25:42.872 "uuid": "9b042825-9244-50f7-8d43-097d6ce9790a", 00:25:42.872 "is_configured": true, 00:25:42.872 "data_offset": 2048, 00:25:42.872 "data_size": 63488 00:25:42.872 } 00:25:42.872 ] 00:25:42.872 }' 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:42.872 17:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.459 17:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:43.459 17:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.459 17:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.459 [2024-11-26 17:22:13.324087] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:43.459 [2024-11-26 17:22:13.324124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:43.459 [2024-11-26 17:22:13.326787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:43.459 [2024-11-26 17:22:13.326848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:43.459 [2024-11-26 17:22:13.326893] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:43.459 [2024-11-26 17:22:13.326905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:43.459 { 00:25:43.459 "results": [ 00:25:43.459 { 00:25:43.459 "job": "raid_bdev1", 00:25:43.459 "core_mask": "0x1", 00:25:43.459 "workload": "randrw", 00:25:43.459 "percentage": 50, 
00:25:43.459 "status": "finished", 00:25:43.459 "queue_depth": 1, 00:25:43.459 "io_size": 131072, 00:25:43.459 "runtime": 1.32461, 00:25:43.459 "iops": 15198.43576599905, 00:25:43.459 "mibps": 1899.8044707498811, 00:25:43.459 "io_failed": 1, 00:25:43.459 "io_timeout": 0, 00:25:43.459 "avg_latency_us": 91.50786530615582, 00:25:43.459 "min_latency_us": 26.730923694779115, 00:25:43.459 "max_latency_us": 1427.8425702811246 00:25:43.459 } 00:25:43.459 ], 00:25:43.459 "core_count": 1 00:25:43.459 } 00:25:43.459 17:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.459 17:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67204 00:25:43.459 17:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67204 ']' 00:25:43.459 17:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67204 00:25:43.459 17:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:25:43.459 17:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.459 17:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67204 00:25:43.459 killing process with pid 67204 00:25:43.459 17:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:43.460 17:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:43.460 17:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67204' 00:25:43.460 17:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67204 00:25:43.460 [2024-11-26 17:22:13.376992] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:43.460 17:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67204 00:25:43.741 [2024-11-26 
17:22:13.619163] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:45.119 17:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cxCEvaJvlQ 00:25:45.119 17:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:45.119 17:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:45.119 17:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:25:45.119 17:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:25:45.119 17:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:45.119 17:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:45.119 17:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:25:45.119 00:25:45.119 real 0m4.592s 00:25:45.119 user 0m5.333s 00:25:45.119 sys 0m0.670s 00:25:45.119 17:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:45.119 ************************************ 00:25:45.119 END TEST raid_read_error_test 00:25:45.119 ************************************ 00:25:45.119 17:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.119 17:22:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:25:45.119 17:22:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:45.119 17:22:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:45.119 17:22:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:45.119 ************************************ 00:25:45.119 START TEST raid_write_error_test 00:25:45.119 ************************************ 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:25:45.119 17:22:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:45.119 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:45.119 17:22:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:45.120 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:45.120 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:25:45.120 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:25:45.120 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:25:45.120 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:45.120 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NLEIRM6hJG 00:25:45.120 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67344 00:25:45.120 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:45.120 17:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67344 00:25:45.120 17:22:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67344 ']' 00:25:45.120 17:22:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.120 17:22:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:45.120 17:22:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:45.120 17:22:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:45.120 17:22:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.120 [2024-11-26 17:22:15.056304] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:25:45.120 [2024-11-26 17:22:15.056659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67344 ] 00:25:45.378 [2024-11-26 17:22:15.239480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.378 [2024-11-26 17:22:15.383456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.637 [2024-11-26 17:22:15.606991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:45.637 [2024-11-26 17:22:15.607195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:45.895 17:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.895 17:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:25:45.895 17:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:45.895 17:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:45.896 17:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.896 17:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.896 BaseBdev1_malloc 00:25:45.896 17:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.896 17:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:25:45.896 17:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.896 17:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.896 true 00:25:45.896 17:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.896 17:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:45.896 17:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.896 17:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.896 [2024-11-26 17:22:15.969742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:45.896 [2024-11-26 17:22:15.969806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.896 [2024-11-26 17:22:15.969830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:45.896 [2024-11-26 17:22:15.969845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.896 [2024-11-26 17:22:15.972451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.896 [2024-11-26 17:22:15.972499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:45.896 BaseBdev1 00:25:45.896 17:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.896 17:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:45.896 17:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:45.896 17:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.896 17:22:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:46.154 BaseBdev2_malloc 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.154 true 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.154 [2024-11-26 17:22:16.030753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:46.154 [2024-11-26 17:22:16.030820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.154 [2024-11-26 17:22:16.030841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:46.154 [2024-11-26 17:22:16.030856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.154 [2024-11-26 17:22:16.033390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.154 [2024-11-26 17:22:16.033435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:46.154 BaseBdev2 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:46.154 17:22:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.154 BaseBdev3_malloc 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.154 true 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.154 [2024-11-26 17:22:16.107016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:46.154 [2024-11-26 17:22:16.107079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.154 [2024-11-26 17:22:16.107100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:46.154 [2024-11-26 17:22:16.107115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.154 [2024-11-26 17:22:16.110417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.154 [2024-11-26 17:22:16.110463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:25:46.154 BaseBdev3 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.154 [2024-11-26 17:22:16.115202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:46.154 [2024-11-26 17:22:16.117688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:46.154 [2024-11-26 17:22:16.117767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:46.154 [2024-11-26 17:22:16.117973] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:46.154 [2024-11-26 17:22:16.117987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:46.154 [2024-11-26 17:22:16.118275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:25:46.154 [2024-11-26 17:22:16.118445] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:46.154 [2024-11-26 17:22:16.118463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:46.154 [2024-11-26 17:22:16.118769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:46.154 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.155 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.155 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.155 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.155 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.155 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:46.155 "name": "raid_bdev1", 00:25:46.155 "uuid": "757d66f8-df89-46ea-a0c9-773d49184fda", 00:25:46.155 "strip_size_kb": 64, 00:25:46.155 "state": "online", 00:25:46.155 "raid_level": "concat", 00:25:46.155 "superblock": true, 00:25:46.155 "num_base_bdevs": 3, 00:25:46.155 "num_base_bdevs_discovered": 3, 00:25:46.155 "num_base_bdevs_operational": 3, 00:25:46.155 "base_bdevs_list": [ 00:25:46.155 { 00:25:46.155 
"name": "BaseBdev1", 00:25:46.155 "uuid": "755d95af-dc88-546a-a5c0-22ea7b6cee1a", 00:25:46.155 "is_configured": true, 00:25:46.155 "data_offset": 2048, 00:25:46.155 "data_size": 63488 00:25:46.155 }, 00:25:46.155 { 00:25:46.155 "name": "BaseBdev2", 00:25:46.155 "uuid": "5060309d-b3d1-55c1-be3d-d08b2022af34", 00:25:46.155 "is_configured": true, 00:25:46.155 "data_offset": 2048, 00:25:46.155 "data_size": 63488 00:25:46.155 }, 00:25:46.155 { 00:25:46.155 "name": "BaseBdev3", 00:25:46.155 "uuid": "ced5e573-c787-5f76-b554-51a3aa71f91a", 00:25:46.155 "is_configured": true, 00:25:46.155 "data_offset": 2048, 00:25:46.155 "data_size": 63488 00:25:46.155 } 00:25:46.155 ] 00:25:46.155 }' 00:25:46.155 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:46.155 17:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.722 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:46.722 17:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:46.722 [2024-11-26 17:22:16.640057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:25:47.659 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:47.660 "name": "raid_bdev1", 00:25:47.660 "uuid": "757d66f8-df89-46ea-a0c9-773d49184fda", 00:25:47.660 "strip_size_kb": 64, 00:25:47.660 "state": "online", 
00:25:47.660 "raid_level": "concat", 00:25:47.660 "superblock": true, 00:25:47.660 "num_base_bdevs": 3, 00:25:47.660 "num_base_bdevs_discovered": 3, 00:25:47.660 "num_base_bdevs_operational": 3, 00:25:47.660 "base_bdevs_list": [ 00:25:47.660 { 00:25:47.660 "name": "BaseBdev1", 00:25:47.660 "uuid": "755d95af-dc88-546a-a5c0-22ea7b6cee1a", 00:25:47.660 "is_configured": true, 00:25:47.660 "data_offset": 2048, 00:25:47.660 "data_size": 63488 00:25:47.660 }, 00:25:47.660 { 00:25:47.660 "name": "BaseBdev2", 00:25:47.660 "uuid": "5060309d-b3d1-55c1-be3d-d08b2022af34", 00:25:47.660 "is_configured": true, 00:25:47.660 "data_offset": 2048, 00:25:47.660 "data_size": 63488 00:25:47.660 }, 00:25:47.660 { 00:25:47.660 "name": "BaseBdev3", 00:25:47.660 "uuid": "ced5e573-c787-5f76-b554-51a3aa71f91a", 00:25:47.660 "is_configured": true, 00:25:47.660 "data_offset": 2048, 00:25:47.660 "data_size": 63488 00:25:47.660 } 00:25:47.660 ] 00:25:47.660 }' 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:47.660 17:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.919 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:47.919 17:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.919 17:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.919 [2024-11-26 17:22:17.991065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:47.919 [2024-11-26 17:22:17.991259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:47.919 [2024-11-26 17:22:17.994336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:47.919 [2024-11-26 17:22:17.994555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:47.919 [2024-11-26 17:22:17.994619] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:47.919 [2024-11-26 17:22:17.994632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:47.919 { 00:25:47.919 "results": [ 00:25:47.919 { 00:25:47.919 "job": "raid_bdev1", 00:25:47.919 "core_mask": "0x1", 00:25:47.919 "workload": "randrw", 00:25:47.919 "percentage": 50, 00:25:47.919 "status": "finished", 00:25:47.919 "queue_depth": 1, 00:25:47.919 "io_size": 131072, 00:25:47.919 "runtime": 1.351102, 00:25:47.919 "iops": 14874.524647287917, 00:25:47.919 "mibps": 1859.3155809109896, 00:25:47.919 "io_failed": 1, 00:25:47.919 "io_timeout": 0, 00:25:47.919 "avg_latency_us": 93.54210377183928, 00:25:47.919 "min_latency_us": 27.142168674698794, 00:25:47.919 "max_latency_us": 1506.8016064257029 00:25:47.919 } 00:25:47.919 ], 00:25:47.919 "core_count": 1 00:25:47.919 } 00:25:47.919 17:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.919 17:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67344 00:25:47.919 17:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67344 ']' 00:25:47.919 17:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67344 00:25:47.919 17:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:25:47.919 17:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:47.919 17:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67344 00:25:48.178 17:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:48.178 17:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:48.178 killing process with pid 67344 00:25:48.178 
17:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67344' 00:25:48.178 17:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67344 00:25:48.178 [2024-11-26 17:22:18.044874] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:48.178 17:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67344 00:25:48.178 [2024-11-26 17:22:18.288863] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:49.581 17:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NLEIRM6hJG 00:25:49.581 17:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:49.581 17:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:49.581 17:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:25:49.581 17:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:25:49.581 17:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:49.581 17:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:49.581 17:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:25:49.581 00:25:49.581 real 0m4.633s 00:25:49.581 user 0m5.427s 00:25:49.581 sys 0m0.644s 00:25:49.582 17:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:49.582 ************************************ 00:25:49.582 END TEST raid_write_error_test 00:25:49.582 ************************************ 00:25:49.582 17:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.582 17:22:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:25:49.582 17:22:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:25:49.582 17:22:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:49.582 17:22:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:49.582 17:22:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:49.582 ************************************ 00:25:49.582 START TEST raid_state_function_test 00:25:49.582 ************************************ 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67490 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:49.582 Process raid pid: 67490 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67490' 00:25:49.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67490 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67490 ']' 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.582 17:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.841 [2024-11-26 17:22:19.755398] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:25:49.842 [2024-11-26 17:22:19.755794] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.842 [2024-11-26 17:22:19.945612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.101 [2024-11-26 17:22:20.098389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.360 [2024-11-26 17:22:20.342425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:50.360 [2024-11-26 17:22:20.342659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:50.619 [2024-11-26 17:22:20.626772] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:50.619 [2024-11-26 17:22:20.626844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:50.619 [2024-11-26 17:22:20.626858] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:50.619 [2024-11-26 17:22:20.626872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:50.619 [2024-11-26 17:22:20.626880] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:50.619 [2024-11-26 17:22:20.626894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:50.619 17:22:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:50.619 "name": "Existed_Raid", 00:25:50.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.619 "strip_size_kb": 0, 00:25:50.619 "state": "configuring", 00:25:50.619 "raid_level": "raid1", 00:25:50.619 "superblock": false, 00:25:50.619 "num_base_bdevs": 3, 00:25:50.619 "num_base_bdevs_discovered": 0, 00:25:50.619 "num_base_bdevs_operational": 3, 00:25:50.619 "base_bdevs_list": [ 00:25:50.619 { 00:25:50.619 "name": "BaseBdev1", 00:25:50.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.619 "is_configured": false, 00:25:50.619 "data_offset": 0, 00:25:50.619 "data_size": 0 00:25:50.619 }, 00:25:50.619 { 00:25:50.619 "name": "BaseBdev2", 00:25:50.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.619 "is_configured": false, 00:25:50.619 "data_offset": 0, 00:25:50.619 "data_size": 0 00:25:50.619 }, 00:25:50.619 { 00:25:50.619 "name": "BaseBdev3", 00:25:50.619 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:50.619 "is_configured": false, 00:25:50.619 "data_offset": 0, 00:25:50.619 "data_size": 0 00:25:50.619 } 00:25:50.619 ] 00:25:50.619 }' 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:50.619 17:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.187 [2024-11-26 17:22:21.058122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:51.187 [2024-11-26 17:22:21.058301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.187 [2024-11-26 17:22:21.070078] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:51.187 [2024-11-26 17:22:21.070137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:51.187 [2024-11-26 17:22:21.070149] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:51.187 [2024-11-26 17:22:21.070164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:25:51.187 [2024-11-26 17:22:21.070172] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:51.187 [2024-11-26 17:22:21.070186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.187 [2024-11-26 17:22:21.118941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:51.187 BaseBdev1 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.187 [ 00:25:51.187 { 00:25:51.187 "name": "BaseBdev1", 00:25:51.187 "aliases": [ 00:25:51.187 "43497f43-0632-49a3-a51a-d70d7b9e2a85" 00:25:51.187 ], 00:25:51.187 "product_name": "Malloc disk", 00:25:51.187 "block_size": 512, 00:25:51.187 "num_blocks": 65536, 00:25:51.187 "uuid": "43497f43-0632-49a3-a51a-d70d7b9e2a85", 00:25:51.187 "assigned_rate_limits": { 00:25:51.187 "rw_ios_per_sec": 0, 00:25:51.187 "rw_mbytes_per_sec": 0, 00:25:51.187 "r_mbytes_per_sec": 0, 00:25:51.187 "w_mbytes_per_sec": 0 00:25:51.187 }, 00:25:51.187 "claimed": true, 00:25:51.187 "claim_type": "exclusive_write", 00:25:51.187 "zoned": false, 00:25:51.187 "supported_io_types": { 00:25:51.187 "read": true, 00:25:51.187 "write": true, 00:25:51.187 "unmap": true, 00:25:51.187 "flush": true, 00:25:51.187 "reset": true, 00:25:51.187 "nvme_admin": false, 00:25:51.187 "nvme_io": false, 00:25:51.187 "nvme_io_md": false, 00:25:51.187 "write_zeroes": true, 00:25:51.187 "zcopy": true, 00:25:51.187 "get_zone_info": false, 00:25:51.187 "zone_management": false, 00:25:51.187 "zone_append": false, 00:25:51.187 "compare": false, 00:25:51.187 "compare_and_write": false, 00:25:51.187 "abort": true, 00:25:51.187 "seek_hole": false, 00:25:51.187 "seek_data": false, 00:25:51.187 "copy": true, 00:25:51.187 "nvme_iov_md": false 00:25:51.187 }, 00:25:51.187 "memory_domains": [ 00:25:51.187 { 00:25:51.187 "dma_device_id": "system", 00:25:51.187 "dma_device_type": 1 00:25:51.187 }, 00:25:51.187 { 00:25:51.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:25:51.187 "dma_device_type": 2 00:25:51.187 } 00:25:51.187 ], 00:25:51.187 "driver_specific": {} 00:25:51.187 } 00:25:51.187 ] 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.187 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.188 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:51.188 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.188 17:22:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.188 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:51.188 "name": "Existed_Raid", 00:25:51.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.188 "strip_size_kb": 0, 00:25:51.188 "state": "configuring", 00:25:51.188 "raid_level": "raid1", 00:25:51.188 "superblock": false, 00:25:51.188 "num_base_bdevs": 3, 00:25:51.188 "num_base_bdevs_discovered": 1, 00:25:51.188 "num_base_bdevs_operational": 3, 00:25:51.188 "base_bdevs_list": [ 00:25:51.188 { 00:25:51.188 "name": "BaseBdev1", 00:25:51.188 "uuid": "43497f43-0632-49a3-a51a-d70d7b9e2a85", 00:25:51.188 "is_configured": true, 00:25:51.188 "data_offset": 0, 00:25:51.188 "data_size": 65536 00:25:51.188 }, 00:25:51.188 { 00:25:51.188 "name": "BaseBdev2", 00:25:51.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.188 "is_configured": false, 00:25:51.188 "data_offset": 0, 00:25:51.188 "data_size": 0 00:25:51.188 }, 00:25:51.188 { 00:25:51.188 "name": "BaseBdev3", 00:25:51.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.188 "is_configured": false, 00:25:51.188 "data_offset": 0, 00:25:51.188 "data_size": 0 00:25:51.188 } 00:25:51.188 ] 00:25:51.188 }' 00:25:51.188 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:51.188 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.756 [2024-11-26 17:22:21.618404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:51.756 [2024-11-26 17:22:21.618483] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.756 [2024-11-26 17:22:21.626440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:51.756 [2024-11-26 17:22:21.629191] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:51.756 [2024-11-26 17:22:21.629246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:51.756 [2024-11-26 17:22:21.629260] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:51.756 [2024-11-26 17:22:21.629291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:51.756 17:22:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:51.756 "name": "Existed_Raid", 00:25:51.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.756 "strip_size_kb": 0, 00:25:51.756 "state": "configuring", 00:25:51.756 "raid_level": "raid1", 00:25:51.756 "superblock": false, 00:25:51.756 "num_base_bdevs": 3, 00:25:51.756 "num_base_bdevs_discovered": 1, 00:25:51.756 "num_base_bdevs_operational": 3, 00:25:51.756 "base_bdevs_list": [ 00:25:51.756 { 00:25:51.756 "name": "BaseBdev1", 00:25:51.756 "uuid": "43497f43-0632-49a3-a51a-d70d7b9e2a85", 00:25:51.756 "is_configured": true, 00:25:51.756 "data_offset": 0, 
00:25:51.756 "data_size": 65536 00:25:51.756 }, 00:25:51.756 { 00:25:51.756 "name": "BaseBdev2", 00:25:51.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.756 "is_configured": false, 00:25:51.756 "data_offset": 0, 00:25:51.756 "data_size": 0 00:25:51.756 }, 00:25:51.756 { 00:25:51.756 "name": "BaseBdev3", 00:25:51.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.756 "is_configured": false, 00:25:51.756 "data_offset": 0, 00:25:51.756 "data_size": 0 00:25:51.756 } 00:25:51.756 ] 00:25:51.756 }' 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:51.756 17:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:52.015 [2024-11-26 17:22:22.089835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:52.015 BaseBdev2 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.015 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:52.015 [ 00:25:52.015 { 00:25:52.015 "name": "BaseBdev2", 00:25:52.015 "aliases": [ 00:25:52.015 "062f5385-a5be-4f5d-a3cd-74b435b0492f" 00:25:52.015 ], 00:25:52.015 "product_name": "Malloc disk", 00:25:52.015 "block_size": 512, 00:25:52.015 "num_blocks": 65536, 00:25:52.015 "uuid": "062f5385-a5be-4f5d-a3cd-74b435b0492f", 00:25:52.015 "assigned_rate_limits": { 00:25:52.015 "rw_ios_per_sec": 0, 00:25:52.015 "rw_mbytes_per_sec": 0, 00:25:52.015 "r_mbytes_per_sec": 0, 00:25:52.015 "w_mbytes_per_sec": 0 00:25:52.015 }, 00:25:52.015 "claimed": true, 00:25:52.015 "claim_type": "exclusive_write", 00:25:52.016 "zoned": false, 00:25:52.016 "supported_io_types": { 00:25:52.016 "read": true, 00:25:52.016 "write": true, 00:25:52.016 "unmap": true, 00:25:52.016 "flush": true, 00:25:52.016 "reset": true, 00:25:52.016 "nvme_admin": false, 00:25:52.016 "nvme_io": false, 00:25:52.274 "nvme_io_md": false, 00:25:52.274 "write_zeroes": true, 00:25:52.274 "zcopy": true, 00:25:52.274 "get_zone_info": false, 00:25:52.274 "zone_management": false, 00:25:52.274 "zone_append": false, 00:25:52.274 "compare": false, 00:25:52.274 "compare_and_write": false, 00:25:52.274 "abort": true, 00:25:52.274 "seek_hole": 
false, 00:25:52.274 "seek_data": false, 00:25:52.274 "copy": true, 00:25:52.274 "nvme_iov_md": false 00:25:52.274 }, 00:25:52.274 "memory_domains": [ 00:25:52.274 { 00:25:52.274 "dma_device_id": "system", 00:25:52.274 "dma_device_type": 1 00:25:52.274 }, 00:25:52.274 { 00:25:52.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.274 "dma_device_type": 2 00:25:52.274 } 00:25:52.274 ], 00:25:52.274 "driver_specific": {} 00:25:52.274 } 00:25:52.274 ] 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.274 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:52.275 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:52.275 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.275 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:52.275 "name": "Existed_Raid", 00:25:52.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.275 "strip_size_kb": 0, 00:25:52.275 "state": "configuring", 00:25:52.275 "raid_level": "raid1", 00:25:52.275 "superblock": false, 00:25:52.275 "num_base_bdevs": 3, 00:25:52.275 "num_base_bdevs_discovered": 2, 00:25:52.275 "num_base_bdevs_operational": 3, 00:25:52.275 "base_bdevs_list": [ 00:25:52.275 { 00:25:52.275 "name": "BaseBdev1", 00:25:52.275 "uuid": "43497f43-0632-49a3-a51a-d70d7b9e2a85", 00:25:52.275 "is_configured": true, 00:25:52.275 "data_offset": 0, 00:25:52.275 "data_size": 65536 00:25:52.275 }, 00:25:52.275 { 00:25:52.275 "name": "BaseBdev2", 00:25:52.275 "uuid": "062f5385-a5be-4f5d-a3cd-74b435b0492f", 00:25:52.275 "is_configured": true, 00:25:52.275 "data_offset": 0, 00:25:52.275 "data_size": 65536 00:25:52.275 }, 00:25:52.275 { 00:25:52.275 "name": "BaseBdev3", 00:25:52.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.275 "is_configured": false, 00:25:52.275 "data_offset": 0, 00:25:52.275 "data_size": 0 00:25:52.275 } 00:25:52.275 ] 00:25:52.275 }' 00:25:52.275 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:52.275 17:22:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:52.534 [2024-11-26 17:22:22.612757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:52.534 [2024-11-26 17:22:22.612824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:52.534 [2024-11-26 17:22:22.612841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:52.534 [2024-11-26 17:22:22.613166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:52.534 [2024-11-26 17:22:22.613363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:52.534 [2024-11-26 17:22:22.613374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:52.534 [2024-11-26 17:22:22.613718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:52.534 BaseBdev3 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:52.534 17:22:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.534 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:52.534 [ 00:25:52.534 { 00:25:52.534 "name": "BaseBdev3", 00:25:52.534 "aliases": [ 00:25:52.534 "b3329201-64c5-49c8-8e53-2b4cd51e1062" 00:25:52.534 ], 00:25:52.535 "product_name": "Malloc disk", 00:25:52.535 "block_size": 512, 00:25:52.535 "num_blocks": 65536, 00:25:52.535 "uuid": "b3329201-64c5-49c8-8e53-2b4cd51e1062", 00:25:52.535 "assigned_rate_limits": { 00:25:52.535 "rw_ios_per_sec": 0, 00:25:52.793 "rw_mbytes_per_sec": 0, 00:25:52.793 "r_mbytes_per_sec": 0, 00:25:52.793 "w_mbytes_per_sec": 0 00:25:52.793 }, 00:25:52.793 "claimed": true, 00:25:52.793 "claim_type": "exclusive_write", 00:25:52.793 "zoned": false, 00:25:52.793 "supported_io_types": { 00:25:52.793 "read": true, 00:25:52.793 "write": true, 00:25:52.793 "unmap": true, 00:25:52.793 "flush": true, 00:25:52.793 "reset": true, 00:25:52.793 "nvme_admin": false, 00:25:52.793 "nvme_io": false, 00:25:52.793 "nvme_io_md": false, 00:25:52.793 "write_zeroes": true, 00:25:52.793 "zcopy": true, 00:25:52.793 "get_zone_info": false, 00:25:52.793 "zone_management": false, 00:25:52.793 "zone_append": false, 00:25:52.793 "compare": false, 
00:25:52.793 "compare_and_write": false, 00:25:52.793 "abort": true, 00:25:52.794 "seek_hole": false, 00:25:52.794 "seek_data": false, 00:25:52.794 "copy": true, 00:25:52.794 "nvme_iov_md": false 00:25:52.794 }, 00:25:52.794 "memory_domains": [ 00:25:52.794 { 00:25:52.794 "dma_device_id": "system", 00:25:52.794 "dma_device_type": 1 00:25:52.794 }, 00:25:52.794 { 00:25:52.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.794 "dma_device_type": 2 00:25:52.794 } 00:25:52.794 ], 00:25:52.794 "driver_specific": {} 00:25:52.794 } 00:25:52.794 ] 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:52.794 "name": "Existed_Raid", 00:25:52.794 "uuid": "16d4c11f-2d18-43f9-a853-cc721f4f6b96", 00:25:52.794 "strip_size_kb": 0, 00:25:52.794 "state": "online", 00:25:52.794 "raid_level": "raid1", 00:25:52.794 "superblock": false, 00:25:52.794 "num_base_bdevs": 3, 00:25:52.794 "num_base_bdevs_discovered": 3, 00:25:52.794 "num_base_bdevs_operational": 3, 00:25:52.794 "base_bdevs_list": [ 00:25:52.794 { 00:25:52.794 "name": "BaseBdev1", 00:25:52.794 "uuid": "43497f43-0632-49a3-a51a-d70d7b9e2a85", 00:25:52.794 "is_configured": true, 00:25:52.794 "data_offset": 0, 00:25:52.794 "data_size": 65536 00:25:52.794 }, 00:25:52.794 { 00:25:52.794 "name": "BaseBdev2", 00:25:52.794 "uuid": "062f5385-a5be-4f5d-a3cd-74b435b0492f", 00:25:52.794 "is_configured": true, 00:25:52.794 "data_offset": 0, 00:25:52.794 "data_size": 65536 00:25:52.794 }, 00:25:52.794 { 00:25:52.794 "name": "BaseBdev3", 00:25:52.794 "uuid": "b3329201-64c5-49c8-8e53-2b4cd51e1062", 00:25:52.794 "is_configured": true, 00:25:52.794 "data_offset": 0, 00:25:52.794 "data_size": 65536 00:25:52.794 } 00:25:52.794 ] 00:25:52.794 }' 00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:25:52.794 17:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.052 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:53.052 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:53.052 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:53.053 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:53.053 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:53.053 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:53.053 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:53.053 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.053 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.053 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:53.053 [2024-11-26 17:22:23.108461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:53.053 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.053 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:53.053 "name": "Existed_Raid", 00:25:53.053 "aliases": [ 00:25:53.053 "16d4c11f-2d18-43f9-a853-cc721f4f6b96" 00:25:53.053 ], 00:25:53.053 "product_name": "Raid Volume", 00:25:53.053 "block_size": 512, 00:25:53.053 "num_blocks": 65536, 00:25:53.053 "uuid": "16d4c11f-2d18-43f9-a853-cc721f4f6b96", 00:25:53.053 "assigned_rate_limits": { 00:25:53.053 "rw_ios_per_sec": 0, 00:25:53.053 "rw_mbytes_per_sec": 0, 00:25:53.053 "r_mbytes_per_sec": 
0, 00:25:53.053 "w_mbytes_per_sec": 0 00:25:53.053 }, 00:25:53.053 "claimed": false, 00:25:53.053 "zoned": false, 00:25:53.053 "supported_io_types": { 00:25:53.053 "read": true, 00:25:53.053 "write": true, 00:25:53.053 "unmap": false, 00:25:53.053 "flush": false, 00:25:53.053 "reset": true, 00:25:53.053 "nvme_admin": false, 00:25:53.053 "nvme_io": false, 00:25:53.053 "nvme_io_md": false, 00:25:53.053 "write_zeroes": true, 00:25:53.053 "zcopy": false, 00:25:53.053 "get_zone_info": false, 00:25:53.053 "zone_management": false, 00:25:53.053 "zone_append": false, 00:25:53.053 "compare": false, 00:25:53.053 "compare_and_write": false, 00:25:53.053 "abort": false, 00:25:53.053 "seek_hole": false, 00:25:53.053 "seek_data": false, 00:25:53.053 "copy": false, 00:25:53.053 "nvme_iov_md": false 00:25:53.053 }, 00:25:53.053 "memory_domains": [ 00:25:53.053 { 00:25:53.053 "dma_device_id": "system", 00:25:53.053 "dma_device_type": 1 00:25:53.053 }, 00:25:53.053 { 00:25:53.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.053 "dma_device_type": 2 00:25:53.053 }, 00:25:53.053 { 00:25:53.053 "dma_device_id": "system", 00:25:53.053 "dma_device_type": 1 00:25:53.053 }, 00:25:53.053 { 00:25:53.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.053 "dma_device_type": 2 00:25:53.053 }, 00:25:53.053 { 00:25:53.053 "dma_device_id": "system", 00:25:53.053 "dma_device_type": 1 00:25:53.053 }, 00:25:53.053 { 00:25:53.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.053 "dma_device_type": 2 00:25:53.053 } 00:25:53.053 ], 00:25:53.053 "driver_specific": { 00:25:53.053 "raid": { 00:25:53.053 "uuid": "16d4c11f-2d18-43f9-a853-cc721f4f6b96", 00:25:53.053 "strip_size_kb": 0, 00:25:53.053 "state": "online", 00:25:53.053 "raid_level": "raid1", 00:25:53.053 "superblock": false, 00:25:53.053 "num_base_bdevs": 3, 00:25:53.053 "num_base_bdevs_discovered": 3, 00:25:53.053 "num_base_bdevs_operational": 3, 00:25:53.053 "base_bdevs_list": [ 00:25:53.053 { 00:25:53.053 "name": "BaseBdev1", 
00:25:53.053 "uuid": "43497f43-0632-49a3-a51a-d70d7b9e2a85", 00:25:53.053 "is_configured": true, 00:25:53.053 "data_offset": 0, 00:25:53.053 "data_size": 65536 00:25:53.053 }, 00:25:53.053 { 00:25:53.053 "name": "BaseBdev2", 00:25:53.053 "uuid": "062f5385-a5be-4f5d-a3cd-74b435b0492f", 00:25:53.053 "is_configured": true, 00:25:53.053 "data_offset": 0, 00:25:53.053 "data_size": 65536 00:25:53.053 }, 00:25:53.053 { 00:25:53.053 "name": "BaseBdev3", 00:25:53.053 "uuid": "b3329201-64c5-49c8-8e53-2b4cd51e1062", 00:25:53.053 "is_configured": true, 00:25:53.053 "data_offset": 0, 00:25:53.053 "data_size": 65536 00:25:53.053 } 00:25:53.053 ] 00:25:53.053 } 00:25:53.053 } 00:25:53.053 }' 00:25:53.053 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:53.313 BaseBdev2 00:25:53.313 BaseBdev3' 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.313 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.313 [2024-11-26 17:22:23.391817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:53.573 "name": "Existed_Raid", 00:25:53.573 "uuid": "16d4c11f-2d18-43f9-a853-cc721f4f6b96", 00:25:53.573 "strip_size_kb": 0, 00:25:53.573 "state": "online", 00:25:53.573 "raid_level": "raid1", 00:25:53.573 "superblock": false, 00:25:53.573 "num_base_bdevs": 3, 00:25:53.573 "num_base_bdevs_discovered": 2, 00:25:53.573 "num_base_bdevs_operational": 2, 00:25:53.573 "base_bdevs_list": [ 00:25:53.573 { 00:25:53.573 "name": null, 00:25:53.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.573 "is_configured": false, 00:25:53.573 "data_offset": 0, 00:25:53.573 "data_size": 65536 00:25:53.573 }, 00:25:53.573 { 00:25:53.573 "name": "BaseBdev2", 00:25:53.573 "uuid": "062f5385-a5be-4f5d-a3cd-74b435b0492f", 00:25:53.573 "is_configured": true, 00:25:53.573 "data_offset": 0, 00:25:53.573 "data_size": 65536 00:25:53.573 }, 00:25:53.573 { 00:25:53.573 "name": "BaseBdev3", 00:25:53.573 "uuid": "b3329201-64c5-49c8-8e53-2b4cd51e1062", 00:25:53.573 "is_configured": true, 
00:25:53.573 "data_offset": 0, 00:25:53.573 "data_size": 65536 00:25:53.573 } 00:25:53.573 ] 00:25:53.573 }' 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:53.573 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.832 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:53.832 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:53.832 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.832 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.832 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:53.832 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.832 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.091 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:54.091 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:54.091 17:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:54.091 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.091 17:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.091 [2024-11-26 17:22:23.950743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:54.091 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.091 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:54.091 17:22:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:54.091 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.091 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.091 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:54.091 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.091 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.091 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:54.091 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:54.091 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:54.091 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.091 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.091 [2024-11-26 17:22:24.103996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:54.091 [2024-11-26 17:22:24.104115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:54.351 [2024-11-26 17:22:24.204914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:54.351 [2024-11-26 17:22:24.204974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:54.351 [2024-11-26 17:22:24.204991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.351 
17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.351 BaseBdev2 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:54.351 17:22:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.351 [ 00:25:54.351 { 00:25:54.351 "name": "BaseBdev2", 00:25:54.351 "aliases": [ 00:25:54.351 "b2ea8ed4-6c0d-41cd-b7ef-51a3a69a89ba" 00:25:54.351 ], 00:25:54.351 "product_name": "Malloc disk", 00:25:54.351 "block_size": 512, 00:25:54.351 "num_blocks": 65536, 00:25:54.351 "uuid": "b2ea8ed4-6c0d-41cd-b7ef-51a3a69a89ba", 00:25:54.351 "assigned_rate_limits": { 00:25:54.351 "rw_ios_per_sec": 0, 00:25:54.351 "rw_mbytes_per_sec": 0, 00:25:54.351 "r_mbytes_per_sec": 0, 00:25:54.351 "w_mbytes_per_sec": 0 00:25:54.351 }, 00:25:54.351 "claimed": false, 00:25:54.351 "zoned": false, 00:25:54.351 "supported_io_types": { 00:25:54.351 "read": true, 00:25:54.351 "write": true, 00:25:54.351 "unmap": true, 00:25:54.351 "flush": true, 00:25:54.351 "reset": true, 00:25:54.351 "nvme_admin": 
false, 00:25:54.351 "nvme_io": false, 00:25:54.351 "nvme_io_md": false, 00:25:54.351 "write_zeroes": true, 00:25:54.351 "zcopy": true, 00:25:54.351 "get_zone_info": false, 00:25:54.351 "zone_management": false, 00:25:54.351 "zone_append": false, 00:25:54.351 "compare": false, 00:25:54.351 "compare_and_write": false, 00:25:54.351 "abort": true, 00:25:54.351 "seek_hole": false, 00:25:54.351 "seek_data": false, 00:25:54.351 "copy": true, 00:25:54.351 "nvme_iov_md": false 00:25:54.351 }, 00:25:54.351 "memory_domains": [ 00:25:54.351 { 00:25:54.351 "dma_device_id": "system", 00:25:54.351 "dma_device_type": 1 00:25:54.351 }, 00:25:54.351 { 00:25:54.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.351 "dma_device_type": 2 00:25:54.351 } 00:25:54.351 ], 00:25:54.351 "driver_specific": {} 00:25:54.351 } 00:25:54.351 ] 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.351 BaseBdev3 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:54.351 17:22:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.351 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.351 [ 00:25:54.351 { 00:25:54.351 "name": "BaseBdev3", 00:25:54.351 "aliases": [ 00:25:54.351 "eb01cccc-ec8d-4a41-b7e5-3d2b319740ef" 00:25:54.351 ], 00:25:54.351 "product_name": "Malloc disk", 00:25:54.351 "block_size": 512, 00:25:54.351 "num_blocks": 65536, 00:25:54.351 "uuid": "eb01cccc-ec8d-4a41-b7e5-3d2b319740ef", 00:25:54.351 "assigned_rate_limits": { 00:25:54.351 "rw_ios_per_sec": 0, 00:25:54.351 "rw_mbytes_per_sec": 0, 00:25:54.351 "r_mbytes_per_sec": 0, 00:25:54.351 "w_mbytes_per_sec": 0 00:25:54.351 }, 00:25:54.351 "claimed": false, 00:25:54.351 "zoned": false, 00:25:54.351 "supported_io_types": { 00:25:54.351 "read": true, 00:25:54.351 "write": true, 00:25:54.351 "unmap": true, 00:25:54.351 "flush": true, 00:25:54.351 "reset": true, 00:25:54.351 "nvme_admin": 
false, 00:25:54.351 "nvme_io": false, 00:25:54.351 "nvme_io_md": false, 00:25:54.351 "write_zeroes": true, 00:25:54.351 "zcopy": true, 00:25:54.351 "get_zone_info": false, 00:25:54.351 "zone_management": false, 00:25:54.351 "zone_append": false, 00:25:54.351 "compare": false, 00:25:54.351 "compare_and_write": false, 00:25:54.351 "abort": true, 00:25:54.351 "seek_hole": false, 00:25:54.351 "seek_data": false, 00:25:54.351 "copy": true, 00:25:54.351 "nvme_iov_md": false 00:25:54.351 }, 00:25:54.352 "memory_domains": [ 00:25:54.352 { 00:25:54.352 "dma_device_id": "system", 00:25:54.352 "dma_device_type": 1 00:25:54.352 }, 00:25:54.352 { 00:25:54.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.352 "dma_device_type": 2 00:25:54.352 } 00:25:54.352 ], 00:25:54.352 "driver_specific": {} 00:25:54.352 } 00:25:54.352 ] 00:25:54.352 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.352 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:54.352 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:54.352 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:54.352 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:54.352 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.352 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.352 [2024-11-26 17:22:24.461947] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:54.352 [2024-11-26 17:22:24.462009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:54.352 [2024-11-26 17:22:24.462037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:25:54.611 [2024-11-26 17:22:24.464461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.611 
17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:54.611 "name": "Existed_Raid", 00:25:54.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.611 "strip_size_kb": 0, 00:25:54.611 "state": "configuring", 00:25:54.611 "raid_level": "raid1", 00:25:54.611 "superblock": false, 00:25:54.611 "num_base_bdevs": 3, 00:25:54.611 "num_base_bdevs_discovered": 2, 00:25:54.611 "num_base_bdevs_operational": 3, 00:25:54.611 "base_bdevs_list": [ 00:25:54.611 { 00:25:54.611 "name": "BaseBdev1", 00:25:54.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.611 "is_configured": false, 00:25:54.611 "data_offset": 0, 00:25:54.611 "data_size": 0 00:25:54.611 }, 00:25:54.611 { 00:25:54.611 "name": "BaseBdev2", 00:25:54.611 "uuid": "b2ea8ed4-6c0d-41cd-b7ef-51a3a69a89ba", 00:25:54.611 "is_configured": true, 00:25:54.611 "data_offset": 0, 00:25:54.611 "data_size": 65536 00:25:54.611 }, 00:25:54.611 { 00:25:54.611 "name": "BaseBdev3", 00:25:54.611 "uuid": "eb01cccc-ec8d-4a41-b7e5-3d2b319740ef", 00:25:54.611 "is_configured": true, 00:25:54.611 "data_offset": 0, 00:25:54.611 "data_size": 65536 00:25:54.611 } 00:25:54.611 ] 00:25:54.611 }' 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:54.611 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.870 [2024-11-26 17:22:24.941673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.870 17:22:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.870 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.139 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:55.139 "name": "Existed_Raid", 00:25:55.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.139 "strip_size_kb": 0, 00:25:55.139 "state": "configuring", 00:25:55.139 
"raid_level": "raid1", 00:25:55.139 "superblock": false, 00:25:55.139 "num_base_bdevs": 3, 00:25:55.139 "num_base_bdevs_discovered": 1, 00:25:55.139 "num_base_bdevs_operational": 3, 00:25:55.139 "base_bdevs_list": [ 00:25:55.139 { 00:25:55.139 "name": "BaseBdev1", 00:25:55.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.139 "is_configured": false, 00:25:55.139 "data_offset": 0, 00:25:55.139 "data_size": 0 00:25:55.139 }, 00:25:55.139 { 00:25:55.139 "name": null, 00:25:55.139 "uuid": "b2ea8ed4-6c0d-41cd-b7ef-51a3a69a89ba", 00:25:55.139 "is_configured": false, 00:25:55.139 "data_offset": 0, 00:25:55.139 "data_size": 65536 00:25:55.139 }, 00:25:55.139 { 00:25:55.139 "name": "BaseBdev3", 00:25:55.139 "uuid": "eb01cccc-ec8d-4a41-b7e5-3d2b319740ef", 00:25:55.139 "is_configured": true, 00:25:55.139 "data_offset": 0, 00:25:55.139 "data_size": 65536 00:25:55.139 } 00:25:55.139 ] 00:25:55.139 }' 00:25:55.139 17:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:55.139 17:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.415 [2024-11-26 17:22:25.461179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:55.415 BaseBdev1 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.415 [ 00:25:55.415 { 00:25:55.415 "name": "BaseBdev1", 00:25:55.415 "aliases": [ 00:25:55.415 
"60dd1e7f-674f-4c69-af55-847f3e927751" 00:25:55.415 ], 00:25:55.415 "product_name": "Malloc disk", 00:25:55.415 "block_size": 512, 00:25:55.415 "num_blocks": 65536, 00:25:55.415 "uuid": "60dd1e7f-674f-4c69-af55-847f3e927751", 00:25:55.415 "assigned_rate_limits": { 00:25:55.415 "rw_ios_per_sec": 0, 00:25:55.415 "rw_mbytes_per_sec": 0, 00:25:55.415 "r_mbytes_per_sec": 0, 00:25:55.415 "w_mbytes_per_sec": 0 00:25:55.415 }, 00:25:55.415 "claimed": true, 00:25:55.415 "claim_type": "exclusive_write", 00:25:55.415 "zoned": false, 00:25:55.415 "supported_io_types": { 00:25:55.415 "read": true, 00:25:55.415 "write": true, 00:25:55.415 "unmap": true, 00:25:55.415 "flush": true, 00:25:55.415 "reset": true, 00:25:55.415 "nvme_admin": false, 00:25:55.415 "nvme_io": false, 00:25:55.415 "nvme_io_md": false, 00:25:55.415 "write_zeroes": true, 00:25:55.415 "zcopy": true, 00:25:55.415 "get_zone_info": false, 00:25:55.415 "zone_management": false, 00:25:55.415 "zone_append": false, 00:25:55.415 "compare": false, 00:25:55.415 "compare_and_write": false, 00:25:55.415 "abort": true, 00:25:55.415 "seek_hole": false, 00:25:55.415 "seek_data": false, 00:25:55.415 "copy": true, 00:25:55.415 "nvme_iov_md": false 00:25:55.415 }, 00:25:55.415 "memory_domains": [ 00:25:55.415 { 00:25:55.415 "dma_device_id": "system", 00:25:55.415 "dma_device_type": 1 00:25:55.415 }, 00:25:55.415 { 00:25:55.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.415 "dma_device_type": 2 00:25:55.415 } 00:25:55.415 ], 00:25:55.415 "driver_specific": {} 00:25:55.415 } 00:25:55.415 ] 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- 
# local raid_bdev_name=Existed_Raid 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:55.415 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:55.416 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:55.416 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:55.416 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:55.416 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:55.416 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:55.416 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.416 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.416 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.416 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.675 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.675 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:55.675 "name": "Existed_Raid", 00:25:55.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.675 "strip_size_kb": 0, 00:25:55.675 "state": "configuring", 00:25:55.675 "raid_level": "raid1", 00:25:55.675 "superblock": false, 00:25:55.675 "num_base_bdevs": 3, 00:25:55.675 "num_base_bdevs_discovered": 2, 00:25:55.675 "num_base_bdevs_operational": 3, 00:25:55.675 "base_bdevs_list": [ 
00:25:55.675 { 00:25:55.675 "name": "BaseBdev1", 00:25:55.675 "uuid": "60dd1e7f-674f-4c69-af55-847f3e927751", 00:25:55.675 "is_configured": true, 00:25:55.675 "data_offset": 0, 00:25:55.675 "data_size": 65536 00:25:55.675 }, 00:25:55.675 { 00:25:55.675 "name": null, 00:25:55.675 "uuid": "b2ea8ed4-6c0d-41cd-b7ef-51a3a69a89ba", 00:25:55.675 "is_configured": false, 00:25:55.675 "data_offset": 0, 00:25:55.675 "data_size": 65536 00:25:55.675 }, 00:25:55.675 { 00:25:55.675 "name": "BaseBdev3", 00:25:55.675 "uuid": "eb01cccc-ec8d-4a41-b7e5-3d2b319740ef", 00:25:55.675 "is_configured": true, 00:25:55.675 "data_offset": 0, 00:25:55.675 "data_size": 65536 00:25:55.675 } 00:25:55.675 ] 00:25:55.675 }' 00:25:55.675 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:55.675 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.934 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.934 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:55.934 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.934 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.934 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.934 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:55.934 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:55.934 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.934 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.934 [2024-11-26 17:22:25.956639] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:55.934 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.934 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:25:55.934 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:55.934 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:55.934 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:55.934 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:55.935 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:55.935 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:55.935 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:55.935 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:55.935 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:55.935 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.935 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.935 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.935 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.935 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.935 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:25:55.935 "name": "Existed_Raid", 00:25:55.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.935 "strip_size_kb": 0, 00:25:55.935 "state": "configuring", 00:25:55.935 "raid_level": "raid1", 00:25:55.935 "superblock": false, 00:25:55.935 "num_base_bdevs": 3, 00:25:55.935 "num_base_bdevs_discovered": 1, 00:25:55.935 "num_base_bdevs_operational": 3, 00:25:55.935 "base_bdevs_list": [ 00:25:55.935 { 00:25:55.935 "name": "BaseBdev1", 00:25:55.935 "uuid": "60dd1e7f-674f-4c69-af55-847f3e927751", 00:25:55.935 "is_configured": true, 00:25:55.935 "data_offset": 0, 00:25:55.935 "data_size": 65536 00:25:55.935 }, 00:25:55.935 { 00:25:55.935 "name": null, 00:25:55.935 "uuid": "b2ea8ed4-6c0d-41cd-b7ef-51a3a69a89ba", 00:25:55.935 "is_configured": false, 00:25:55.935 "data_offset": 0, 00:25:55.935 "data_size": 65536 00:25:55.935 }, 00:25:55.935 { 00:25:55.935 "name": null, 00:25:55.935 "uuid": "eb01cccc-ec8d-4a41-b7e5-3d2b319740ef", 00:25:55.935 "is_configured": false, 00:25:55.935 "data_offset": 0, 00:25:55.935 "data_size": 65536 00:25:55.935 } 00:25:55.935 ] 00:25:55.935 }' 00:25:55.935 17:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:55.935 17:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 
00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.503 [2024-11-26 17:22:26.424003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.503 17:22:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:56.503 "name": "Existed_Raid", 00:25:56.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.503 "strip_size_kb": 0, 00:25:56.503 "state": "configuring", 00:25:56.503 "raid_level": "raid1", 00:25:56.503 "superblock": false, 00:25:56.503 "num_base_bdevs": 3, 00:25:56.503 "num_base_bdevs_discovered": 2, 00:25:56.503 "num_base_bdevs_operational": 3, 00:25:56.503 "base_bdevs_list": [ 00:25:56.503 { 00:25:56.503 "name": "BaseBdev1", 00:25:56.503 "uuid": "60dd1e7f-674f-4c69-af55-847f3e927751", 00:25:56.503 "is_configured": true, 00:25:56.503 "data_offset": 0, 00:25:56.503 "data_size": 65536 00:25:56.503 }, 00:25:56.503 { 00:25:56.503 "name": null, 00:25:56.503 "uuid": "b2ea8ed4-6c0d-41cd-b7ef-51a3a69a89ba", 00:25:56.503 "is_configured": false, 00:25:56.503 "data_offset": 0, 00:25:56.503 "data_size": 65536 00:25:56.503 }, 00:25:56.503 { 00:25:56.503 "name": "BaseBdev3", 00:25:56.503 "uuid": "eb01cccc-ec8d-4a41-b7e5-3d2b319740ef", 00:25:56.503 "is_configured": true, 00:25:56.503 "data_offset": 0, 00:25:56.503 "data_size": 65536 00:25:56.503 } 00:25:56.503 ] 00:25:56.503 }' 00:25:56.503 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:56.504 17:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.073 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:57.073 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.073 17:22:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.073 17:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.073 17:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.073 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:57.073 17:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:57.073 17:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.073 17:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.073 [2024-11-26 17:22:26.939326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:57.073 "name": "Existed_Raid", 00:25:57.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.073 "strip_size_kb": 0, 00:25:57.073 "state": "configuring", 00:25:57.073 "raid_level": "raid1", 00:25:57.073 "superblock": false, 00:25:57.073 "num_base_bdevs": 3, 00:25:57.073 "num_base_bdevs_discovered": 1, 00:25:57.073 "num_base_bdevs_operational": 3, 00:25:57.073 "base_bdevs_list": [ 00:25:57.073 { 00:25:57.073 "name": null, 00:25:57.073 "uuid": "60dd1e7f-674f-4c69-af55-847f3e927751", 00:25:57.073 "is_configured": false, 00:25:57.073 "data_offset": 0, 00:25:57.073 "data_size": 65536 00:25:57.073 }, 00:25:57.073 { 00:25:57.073 "name": null, 00:25:57.073 "uuid": "b2ea8ed4-6c0d-41cd-b7ef-51a3a69a89ba", 00:25:57.073 "is_configured": false, 00:25:57.073 "data_offset": 0, 00:25:57.073 "data_size": 65536 00:25:57.073 }, 00:25:57.073 { 00:25:57.073 "name": "BaseBdev3", 00:25:57.073 "uuid": "eb01cccc-ec8d-4a41-b7e5-3d2b319740ef", 00:25:57.073 "is_configured": true, 00:25:57.073 "data_offset": 0, 00:25:57.073 "data_size": 65536 00:25:57.073 } 00:25:57.073 ] 00:25:57.073 }' 00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:25:57.073 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.641 [2024-11-26 17:22:27.556632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:57.641 17:22:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:57.641 "name": "Existed_Raid", 00:25:57.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.641 "strip_size_kb": 0, 00:25:57.641 "state": "configuring", 00:25:57.641 "raid_level": "raid1", 00:25:57.641 "superblock": false, 00:25:57.641 "num_base_bdevs": 3, 00:25:57.641 "num_base_bdevs_discovered": 2, 00:25:57.641 "num_base_bdevs_operational": 3, 00:25:57.641 "base_bdevs_list": [ 00:25:57.641 { 00:25:57.641 "name": null, 00:25:57.641 "uuid": "60dd1e7f-674f-4c69-af55-847f3e927751", 00:25:57.641 "is_configured": false, 00:25:57.641 "data_offset": 0, 00:25:57.641 "data_size": 65536 00:25:57.641 }, 00:25:57.641 { 00:25:57.641 "name": "BaseBdev2", 00:25:57.641 "uuid": "b2ea8ed4-6c0d-41cd-b7ef-51a3a69a89ba", 00:25:57.641 "is_configured": true, 00:25:57.641 "data_offset": 
0, 00:25:57.641 "data_size": 65536 00:25:57.641 }, 00:25:57.641 { 00:25:57.641 "name": "BaseBdev3", 00:25:57.641 "uuid": "eb01cccc-ec8d-4a41-b7e5-3d2b319740ef", 00:25:57.641 "is_configured": true, 00:25:57.641 "data_offset": 0, 00:25:57.641 "data_size": 65536 00:25:57.641 } 00:25:57.641 ] 00:25:57.641 }' 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:57.641 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.900 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.900 17:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:57.900 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.900 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.900 17:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.158 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:58.158 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:58.158 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.158 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:58.158 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.158 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 60dd1e7f-674f-4c69-af55-847f3e927751 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.159 [2024-11-26 17:22:28.104546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:58.159 [2024-11-26 17:22:28.104622] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:58.159 [2024-11-26 17:22:28.104632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:58.159 [2024-11-26 17:22:28.104932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:58.159 [2024-11-26 17:22:28.105102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:58.159 [2024-11-26 17:22:28.105116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:58.159 [2024-11-26 17:22:28.105414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:58.159 NewBaseBdev 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:58.159 
17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.159 [ 00:25:58.159 { 00:25:58.159 "name": "NewBaseBdev", 00:25:58.159 "aliases": [ 00:25:58.159 "60dd1e7f-674f-4c69-af55-847f3e927751" 00:25:58.159 ], 00:25:58.159 "product_name": "Malloc disk", 00:25:58.159 "block_size": 512, 00:25:58.159 "num_blocks": 65536, 00:25:58.159 "uuid": "60dd1e7f-674f-4c69-af55-847f3e927751", 00:25:58.159 "assigned_rate_limits": { 00:25:58.159 "rw_ios_per_sec": 0, 00:25:58.159 "rw_mbytes_per_sec": 0, 00:25:58.159 "r_mbytes_per_sec": 0, 00:25:58.159 "w_mbytes_per_sec": 0 00:25:58.159 }, 00:25:58.159 "claimed": true, 00:25:58.159 "claim_type": "exclusive_write", 00:25:58.159 "zoned": false, 00:25:58.159 "supported_io_types": { 00:25:58.159 "read": true, 00:25:58.159 "write": true, 00:25:58.159 "unmap": true, 00:25:58.159 "flush": true, 00:25:58.159 "reset": true, 00:25:58.159 "nvme_admin": false, 00:25:58.159 "nvme_io": false, 00:25:58.159 "nvme_io_md": false, 00:25:58.159 "write_zeroes": true, 00:25:58.159 "zcopy": true, 00:25:58.159 "get_zone_info": false, 00:25:58.159 "zone_management": false, 00:25:58.159 "zone_append": false, 00:25:58.159 "compare": false, 00:25:58.159 "compare_and_write": false, 00:25:58.159 "abort": true, 00:25:58.159 "seek_hole": false, 00:25:58.159 "seek_data": false, 00:25:58.159 "copy": true, 00:25:58.159 "nvme_iov_md": false 00:25:58.159 }, 00:25:58.159 
"memory_domains": [ 00:25:58.159 { 00:25:58.159 "dma_device_id": "system", 00:25:58.159 "dma_device_type": 1 00:25:58.159 }, 00:25:58.159 { 00:25:58.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.159 "dma_device_type": 2 00:25:58.159 } 00:25:58.159 ], 00:25:58.159 "driver_specific": {} 00:25:58.159 } 00:25:58.159 ] 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:58.159 "name": "Existed_Raid", 00:25:58.159 "uuid": "dacc8e39-cd19-4373-bdcc-9eea9db1c631", 00:25:58.159 "strip_size_kb": 0, 00:25:58.159 "state": "online", 00:25:58.159 "raid_level": "raid1", 00:25:58.159 "superblock": false, 00:25:58.159 "num_base_bdevs": 3, 00:25:58.159 "num_base_bdevs_discovered": 3, 00:25:58.159 "num_base_bdevs_operational": 3, 00:25:58.159 "base_bdevs_list": [ 00:25:58.159 { 00:25:58.159 "name": "NewBaseBdev", 00:25:58.159 "uuid": "60dd1e7f-674f-4c69-af55-847f3e927751", 00:25:58.159 "is_configured": true, 00:25:58.159 "data_offset": 0, 00:25:58.159 "data_size": 65536 00:25:58.159 }, 00:25:58.159 { 00:25:58.159 "name": "BaseBdev2", 00:25:58.159 "uuid": "b2ea8ed4-6c0d-41cd-b7ef-51a3a69a89ba", 00:25:58.159 "is_configured": true, 00:25:58.159 "data_offset": 0, 00:25:58.159 "data_size": 65536 00:25:58.159 }, 00:25:58.159 { 00:25:58.159 "name": "BaseBdev3", 00:25:58.159 "uuid": "eb01cccc-ec8d-4a41-b7e5-3d2b319740ef", 00:25:58.159 "is_configured": true, 00:25:58.159 "data_offset": 0, 00:25:58.159 "data_size": 65536 00:25:58.159 } 00:25:58.159 ] 00:25:58.159 }' 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:58.159 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:58.725 [2024-11-26 17:22:28.572301] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:58.725 "name": "Existed_Raid", 00:25:58.725 "aliases": [ 00:25:58.725 "dacc8e39-cd19-4373-bdcc-9eea9db1c631" 00:25:58.725 ], 00:25:58.725 "product_name": "Raid Volume", 00:25:58.725 "block_size": 512, 00:25:58.725 "num_blocks": 65536, 00:25:58.725 "uuid": "dacc8e39-cd19-4373-bdcc-9eea9db1c631", 00:25:58.725 "assigned_rate_limits": { 00:25:58.725 "rw_ios_per_sec": 0, 00:25:58.725 "rw_mbytes_per_sec": 0, 00:25:58.725 "r_mbytes_per_sec": 0, 00:25:58.725 "w_mbytes_per_sec": 0 00:25:58.725 }, 00:25:58.725 "claimed": false, 00:25:58.725 "zoned": false, 00:25:58.725 "supported_io_types": { 00:25:58.725 "read": true, 00:25:58.725 "write": true, 00:25:58.725 "unmap": false, 00:25:58.725 "flush": false, 00:25:58.725 "reset": true, 00:25:58.725 "nvme_admin": false, 00:25:58.725 "nvme_io": false, 00:25:58.725 "nvme_io_md": false, 00:25:58.725 "write_zeroes": true, 
00:25:58.725 "zcopy": false, 00:25:58.725 "get_zone_info": false, 00:25:58.725 "zone_management": false, 00:25:58.725 "zone_append": false, 00:25:58.725 "compare": false, 00:25:58.725 "compare_and_write": false, 00:25:58.725 "abort": false, 00:25:58.725 "seek_hole": false, 00:25:58.725 "seek_data": false, 00:25:58.725 "copy": false, 00:25:58.725 "nvme_iov_md": false 00:25:58.725 }, 00:25:58.725 "memory_domains": [ 00:25:58.725 { 00:25:58.725 "dma_device_id": "system", 00:25:58.725 "dma_device_type": 1 00:25:58.725 }, 00:25:58.725 { 00:25:58.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.725 "dma_device_type": 2 00:25:58.725 }, 00:25:58.725 { 00:25:58.725 "dma_device_id": "system", 00:25:58.725 "dma_device_type": 1 00:25:58.725 }, 00:25:58.725 { 00:25:58.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.725 "dma_device_type": 2 00:25:58.725 }, 00:25:58.725 { 00:25:58.725 "dma_device_id": "system", 00:25:58.725 "dma_device_type": 1 00:25:58.725 }, 00:25:58.725 { 00:25:58.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.725 "dma_device_type": 2 00:25:58.725 } 00:25:58.725 ], 00:25:58.725 "driver_specific": { 00:25:58.725 "raid": { 00:25:58.725 "uuid": "dacc8e39-cd19-4373-bdcc-9eea9db1c631", 00:25:58.725 "strip_size_kb": 0, 00:25:58.725 "state": "online", 00:25:58.725 "raid_level": "raid1", 00:25:58.725 "superblock": false, 00:25:58.725 "num_base_bdevs": 3, 00:25:58.725 "num_base_bdevs_discovered": 3, 00:25:58.725 "num_base_bdevs_operational": 3, 00:25:58.725 "base_bdevs_list": [ 00:25:58.725 { 00:25:58.725 "name": "NewBaseBdev", 00:25:58.725 "uuid": "60dd1e7f-674f-4c69-af55-847f3e927751", 00:25:58.725 "is_configured": true, 00:25:58.725 "data_offset": 0, 00:25:58.725 "data_size": 65536 00:25:58.725 }, 00:25:58.725 { 00:25:58.725 "name": "BaseBdev2", 00:25:58.725 "uuid": "b2ea8ed4-6c0d-41cd-b7ef-51a3a69a89ba", 00:25:58.725 "is_configured": true, 00:25:58.725 "data_offset": 0, 00:25:58.725 "data_size": 65536 00:25:58.725 }, 00:25:58.725 { 00:25:58.725 
"name": "BaseBdev3", 00:25:58.725 "uuid": "eb01cccc-ec8d-4a41-b7e5-3d2b319740ef", 00:25:58.725 "is_configured": true, 00:25:58.725 "data_offset": 0, 00:25:58.725 "data_size": 65536 00:25:58.725 } 00:25:58.725 ] 00:25:58.725 } 00:25:58.725 } 00:25:58.725 }' 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:58.725 BaseBdev2 00:25:58.725 BaseBdev3' 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:58.725 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:58.726 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:58.726 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.726 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.726 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.726 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:58.726 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:58.726 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:58.726 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.726 17:22:28 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:25:58.726 [2024-11-26 17:22:28.823642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:58.726 [2024-11-26 17:22:28.823688] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:58.726 [2024-11-26 17:22:28.823810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:58.726 [2024-11-26 17:22:28.824146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:58.726 [2024-11-26 17:22:28.824162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:25:58.726 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.726 17:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67490 00:25:58.726 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67490 ']' 00:25:58.726 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67490 00:25:58.726 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:25:58.982 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:58.982 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67490 00:25:58.983 killing process with pid 67490 00:25:58.983 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:58.983 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:58.983 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67490' 00:25:58.983 17:22:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 67490 00:25:58.983 [2024-11-26 17:22:28.879603] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:58.983 17:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67490 00:25:59.240 [2024-11-26 17:22:29.201452] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:26:00.619 00:26:00.619 real 0m10.808s 00:26:00.619 user 0m16.947s 00:26:00.619 sys 0m2.188s 00:26:00.619 ************************************ 00:26:00.619 END TEST raid_state_function_test 00:26:00.619 ************************************ 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.619 17:22:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:26:00.619 17:22:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:00.619 17:22:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:00.619 17:22:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:00.619 ************************************ 00:26:00.619 START TEST raid_state_function_test_sb 00:26:00.619 ************************************ 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:26:00.619 Process raid pid: 68111 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68111 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68111' 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68111 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68111 ']' 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:00.619 17:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:00.619 [2024-11-26 17:22:30.636061] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:26:00.619 [2024-11-26 17:22:30.636359] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.905 [2024-11-26 17:22:30.823923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.172 [2024-11-26 17:22:31.023427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.172 [2024-11-26 17:22:31.271658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:01.172 [2024-11-26 17:22:31.271716] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.431 [2024-11-26 17:22:31.527977] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:01.431 [2024-11-26 17:22:31.528056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:01.431 [2024-11-26 17:22:31.528078] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:01.431 [2024-11-26 17:22:31.528093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:01.431 [2024-11-26 17:22:31.528102] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:26:01.431 [2024-11-26 17:22:31.528116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.431 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.690 17:22:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.690 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:01.690 "name": "Existed_Raid", 00:26:01.690 "uuid": "28373776-09fe-486c-acca-88f3cfed5aab", 00:26:01.690 "strip_size_kb": 0, 00:26:01.690 "state": "configuring", 00:26:01.690 "raid_level": "raid1", 00:26:01.690 "superblock": true, 00:26:01.690 "num_base_bdevs": 3, 00:26:01.690 "num_base_bdevs_discovered": 0, 00:26:01.690 "num_base_bdevs_operational": 3, 00:26:01.690 "base_bdevs_list": [ 00:26:01.690 { 00:26:01.690 "name": "BaseBdev1", 00:26:01.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.690 "is_configured": false, 00:26:01.690 "data_offset": 0, 00:26:01.690 "data_size": 0 00:26:01.690 }, 00:26:01.690 { 00:26:01.690 "name": "BaseBdev2", 00:26:01.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.690 "is_configured": false, 00:26:01.690 "data_offset": 0, 00:26:01.690 "data_size": 0 00:26:01.690 }, 00:26:01.690 { 00:26:01.690 "name": "BaseBdev3", 00:26:01.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.690 "is_configured": false, 00:26:01.690 "data_offset": 0, 00:26:01.690 "data_size": 0 00:26:01.690 } 00:26:01.690 ] 00:26:01.690 }' 00:26:01.690 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:01.690 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.948 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:01.948 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.948 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.948 [2024-11-26 17:22:31.963294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:01.948 [2024-11-26 17:22:31.963480] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:01.948 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.948 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:01.948 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.948 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.948 [2024-11-26 17:22:31.975280] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:01.948 [2024-11-26 17:22:31.975338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:01.948 [2024-11-26 17:22:31.975350] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:01.948 [2024-11-26 17:22:31.975364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:01.948 [2024-11-26 17:22:31.975373] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:01.948 [2024-11-26 17:22:31.975387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:01.948 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.948 17:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:01.948 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.948 17:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.948 [2024-11-26 17:22:32.029851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:01.948 BaseBdev1 
00:26:01.948 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.949 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:01.949 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:01.949 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:01.949 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:01.949 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:01.949 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:01.949 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:01.949 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.949 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.949 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.949 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:01.949 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.949 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.949 [ 00:26:01.949 { 00:26:01.949 "name": "BaseBdev1", 00:26:01.949 "aliases": [ 00:26:01.949 "19e3db31-2fee-44a9-bddf-f23e0fc4ac45" 00:26:01.949 ], 00:26:01.949 "product_name": "Malloc disk", 00:26:01.949 "block_size": 512, 00:26:01.949 "num_blocks": 65536, 00:26:01.949 "uuid": "19e3db31-2fee-44a9-bddf-f23e0fc4ac45", 00:26:01.949 "assigned_rate_limits": { 00:26:01.949 
"rw_ios_per_sec": 0, 00:26:02.207 "rw_mbytes_per_sec": 0, 00:26:02.207 "r_mbytes_per_sec": 0, 00:26:02.207 "w_mbytes_per_sec": 0 00:26:02.207 }, 00:26:02.207 "claimed": true, 00:26:02.207 "claim_type": "exclusive_write", 00:26:02.207 "zoned": false, 00:26:02.207 "supported_io_types": { 00:26:02.207 "read": true, 00:26:02.207 "write": true, 00:26:02.207 "unmap": true, 00:26:02.207 "flush": true, 00:26:02.208 "reset": true, 00:26:02.208 "nvme_admin": false, 00:26:02.208 "nvme_io": false, 00:26:02.208 "nvme_io_md": false, 00:26:02.208 "write_zeroes": true, 00:26:02.208 "zcopy": true, 00:26:02.208 "get_zone_info": false, 00:26:02.208 "zone_management": false, 00:26:02.208 "zone_append": false, 00:26:02.208 "compare": false, 00:26:02.208 "compare_and_write": false, 00:26:02.208 "abort": true, 00:26:02.208 "seek_hole": false, 00:26:02.208 "seek_data": false, 00:26:02.208 "copy": true, 00:26:02.208 "nvme_iov_md": false 00:26:02.208 }, 00:26:02.208 "memory_domains": [ 00:26:02.208 { 00:26:02.208 "dma_device_id": "system", 00:26:02.208 "dma_device_type": 1 00:26:02.208 }, 00:26:02.208 { 00:26:02.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.208 "dma_device_type": 2 00:26:02.208 } 00:26:02.208 ], 00:26:02.208 "driver_specific": {} 00:26:02.208 } 00:26:02.208 ] 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:02.208 "name": "Existed_Raid", 00:26:02.208 "uuid": "28b783b6-e625-4354-8f9b-dc7fcd5b9a3f", 00:26:02.208 "strip_size_kb": 0, 00:26:02.208 "state": "configuring", 00:26:02.208 "raid_level": "raid1", 00:26:02.208 "superblock": true, 00:26:02.208 "num_base_bdevs": 3, 00:26:02.208 "num_base_bdevs_discovered": 1, 00:26:02.208 "num_base_bdevs_operational": 3, 00:26:02.208 "base_bdevs_list": [ 00:26:02.208 { 00:26:02.208 "name": "BaseBdev1", 00:26:02.208 "uuid": "19e3db31-2fee-44a9-bddf-f23e0fc4ac45", 00:26:02.208 "is_configured": true, 00:26:02.208 "data_offset": 2048, 00:26:02.208 "data_size": 63488 
00:26:02.208 }, 00:26:02.208 { 00:26:02.208 "name": "BaseBdev2", 00:26:02.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.208 "is_configured": false, 00:26:02.208 "data_offset": 0, 00:26:02.208 "data_size": 0 00:26:02.208 }, 00:26:02.208 { 00:26:02.208 "name": "BaseBdev3", 00:26:02.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.208 "is_configured": false, 00:26:02.208 "data_offset": 0, 00:26:02.208 "data_size": 0 00:26:02.208 } 00:26:02.208 ] 00:26:02.208 }' 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:02.208 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.467 [2024-11-26 17:22:32.537660] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:02.467 [2024-11-26 17:22:32.537734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.467 [2024-11-26 17:22:32.549740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:02.467 [2024-11-26 17:22:32.552214] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:02.467 [2024-11-26 17:22:32.552270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:02.467 [2024-11-26 17:22:32.552284] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:02.467 [2024-11-26 17:22:32.552297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:02.467 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.725 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.725 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:02.725 "name": "Existed_Raid", 00:26:02.725 "uuid": "90afd49d-e861-4847-aa5e-50498c8fc14d", 00:26:02.725 "strip_size_kb": 0, 00:26:02.725 "state": "configuring", 00:26:02.725 "raid_level": "raid1", 00:26:02.725 "superblock": true, 00:26:02.725 "num_base_bdevs": 3, 00:26:02.725 "num_base_bdevs_discovered": 1, 00:26:02.725 "num_base_bdevs_operational": 3, 00:26:02.725 "base_bdevs_list": [ 00:26:02.725 { 00:26:02.725 "name": "BaseBdev1", 00:26:02.725 "uuid": "19e3db31-2fee-44a9-bddf-f23e0fc4ac45", 00:26:02.725 "is_configured": true, 00:26:02.725 "data_offset": 2048, 00:26:02.725 "data_size": 63488 00:26:02.725 }, 00:26:02.725 { 00:26:02.725 "name": "BaseBdev2", 00:26:02.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.725 "is_configured": false, 00:26:02.725 "data_offset": 0, 00:26:02.725 "data_size": 0 00:26:02.725 }, 00:26:02.725 { 00:26:02.725 "name": "BaseBdev3", 00:26:02.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.725 "is_configured": false, 00:26:02.725 "data_offset": 0, 00:26:02.725 "data_size": 0 00:26:02.725 } 00:26:02.725 ] 00:26:02.725 }' 00:26:02.725 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:02.725 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:26:02.983 17:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:02.983 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.983 17:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.983 [2024-11-26 17:22:33.034327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:02.983 BaseBdev2 00:26:02.983 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.983 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:02.983 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:02.983 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:02.983 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:02.983 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:02.983 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:02.983 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:02.983 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.983 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.983 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.983 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:02.983 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:02.983 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.983 [ 00:26:02.983 { 00:26:02.983 "name": "BaseBdev2", 00:26:02.983 "aliases": [ 00:26:02.983 "73f3c305-0029-4472-981e-0a5590587969" 00:26:02.983 ], 00:26:02.983 "product_name": "Malloc disk", 00:26:02.983 "block_size": 512, 00:26:02.983 "num_blocks": 65536, 00:26:02.983 "uuid": "73f3c305-0029-4472-981e-0a5590587969", 00:26:02.983 "assigned_rate_limits": { 00:26:02.983 "rw_ios_per_sec": 0, 00:26:02.983 "rw_mbytes_per_sec": 0, 00:26:02.983 "r_mbytes_per_sec": 0, 00:26:02.983 "w_mbytes_per_sec": 0 00:26:02.983 }, 00:26:02.983 "claimed": true, 00:26:02.983 "claim_type": "exclusive_write", 00:26:02.983 "zoned": false, 00:26:02.983 "supported_io_types": { 00:26:02.983 "read": true, 00:26:02.983 "write": true, 00:26:02.983 "unmap": true, 00:26:02.983 "flush": true, 00:26:02.983 "reset": true, 00:26:02.983 "nvme_admin": false, 00:26:02.983 "nvme_io": false, 00:26:02.983 "nvme_io_md": false, 00:26:02.983 "write_zeroes": true, 00:26:02.983 "zcopy": true, 00:26:02.983 "get_zone_info": false, 00:26:02.983 "zone_management": false, 00:26:02.983 "zone_append": false, 00:26:02.983 "compare": false, 00:26:02.983 "compare_and_write": false, 00:26:02.983 "abort": true, 00:26:02.983 "seek_hole": false, 00:26:02.983 "seek_data": false, 00:26:02.983 "copy": true, 00:26:02.983 "nvme_iov_md": false 00:26:02.983 }, 00:26:02.983 "memory_domains": [ 00:26:02.983 { 00:26:02.983 "dma_device_id": "system", 00:26:02.983 "dma_device_type": 1 00:26:02.983 }, 00:26:02.983 { 00:26:02.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.983 "dma_device_type": 2 00:26:02.983 } 00:26:02.983 ], 00:26:02.983 "driver_specific": {} 00:26:02.983 } 00:26:02.983 ] 00:26:02.983 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.984 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.243 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.243 
17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:03.243 "name": "Existed_Raid", 00:26:03.243 "uuid": "90afd49d-e861-4847-aa5e-50498c8fc14d", 00:26:03.243 "strip_size_kb": 0, 00:26:03.243 "state": "configuring", 00:26:03.243 "raid_level": "raid1", 00:26:03.243 "superblock": true, 00:26:03.243 "num_base_bdevs": 3, 00:26:03.243 "num_base_bdevs_discovered": 2, 00:26:03.243 "num_base_bdevs_operational": 3, 00:26:03.243 "base_bdevs_list": [ 00:26:03.243 { 00:26:03.243 "name": "BaseBdev1", 00:26:03.243 "uuid": "19e3db31-2fee-44a9-bddf-f23e0fc4ac45", 00:26:03.243 "is_configured": true, 00:26:03.243 "data_offset": 2048, 00:26:03.243 "data_size": 63488 00:26:03.243 }, 00:26:03.243 { 00:26:03.243 "name": "BaseBdev2", 00:26:03.243 "uuid": "73f3c305-0029-4472-981e-0a5590587969", 00:26:03.243 "is_configured": true, 00:26:03.243 "data_offset": 2048, 00:26:03.243 "data_size": 63488 00:26:03.243 }, 00:26:03.243 { 00:26:03.243 "name": "BaseBdev3", 00:26:03.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.243 "is_configured": false, 00:26:03.243 "data_offset": 0, 00:26:03.243 "data_size": 0 00:26:03.243 } 00:26:03.243 ] 00:26:03.243 }' 00:26:03.243 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:03.243 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.502 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:03.502 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.502 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.502 [2024-11-26 17:22:33.504679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:03.502 [2024-11-26 17:22:33.504979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:26:03.502 [2024-11-26 17:22:33.505008] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:03.502 [2024-11-26 17:22:33.505347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:03.502 BaseBdev3 00:26:03.502 [2024-11-26 17:22:33.505561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:03.502 [2024-11-26 17:22:33.505594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:03.502 [2024-11-26 17:22:33.505765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:03.502 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.502 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:03.502 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:26:03.502 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:03.502 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:03.502 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:03.502 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:03.502 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:03.502 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.502 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.502 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.503 17:22:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.503 [ 00:26:03.503 { 00:26:03.503 "name": "BaseBdev3", 00:26:03.503 "aliases": [ 00:26:03.503 "9976dfe8-ad82-42f7-bf42-335779ff2add" 00:26:03.503 ], 00:26:03.503 "product_name": "Malloc disk", 00:26:03.503 "block_size": 512, 00:26:03.503 "num_blocks": 65536, 00:26:03.503 "uuid": "9976dfe8-ad82-42f7-bf42-335779ff2add", 00:26:03.503 "assigned_rate_limits": { 00:26:03.503 "rw_ios_per_sec": 0, 00:26:03.503 "rw_mbytes_per_sec": 0, 00:26:03.503 "r_mbytes_per_sec": 0, 00:26:03.503 "w_mbytes_per_sec": 0 00:26:03.503 }, 00:26:03.503 "claimed": true, 00:26:03.503 "claim_type": "exclusive_write", 00:26:03.503 "zoned": false, 00:26:03.503 "supported_io_types": { 00:26:03.503 "read": true, 00:26:03.503 "write": true, 00:26:03.503 "unmap": true, 00:26:03.503 "flush": true, 00:26:03.503 "reset": true, 00:26:03.503 "nvme_admin": false, 00:26:03.503 "nvme_io": false, 00:26:03.503 "nvme_io_md": false, 00:26:03.503 "write_zeroes": true, 00:26:03.503 "zcopy": true, 00:26:03.503 "get_zone_info": false, 00:26:03.503 "zone_management": false, 00:26:03.503 "zone_append": false, 00:26:03.503 "compare": false, 00:26:03.503 "compare_and_write": false, 00:26:03.503 "abort": true, 00:26:03.503 "seek_hole": false, 00:26:03.503 "seek_data": false, 00:26:03.503 "copy": true, 00:26:03.503 "nvme_iov_md": false 00:26:03.503 }, 00:26:03.503 "memory_domains": [ 00:26:03.503 { 00:26:03.503 "dma_device_id": "system", 00:26:03.503 "dma_device_type": 1 00:26:03.503 }, 00:26:03.503 { 00:26:03.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.503 "dma_device_type": 2 00:26:03.503 } 00:26:03.503 ], 00:26:03.503 "driver_specific": {} 00:26:03.503 } 00:26:03.503 ] 
00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.503 
17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:03.503 "name": "Existed_Raid", 00:26:03.503 "uuid": "90afd49d-e861-4847-aa5e-50498c8fc14d", 00:26:03.503 "strip_size_kb": 0, 00:26:03.503 "state": "online", 00:26:03.503 "raid_level": "raid1", 00:26:03.503 "superblock": true, 00:26:03.503 "num_base_bdevs": 3, 00:26:03.503 "num_base_bdevs_discovered": 3, 00:26:03.503 "num_base_bdevs_operational": 3, 00:26:03.503 "base_bdevs_list": [ 00:26:03.503 { 00:26:03.503 "name": "BaseBdev1", 00:26:03.503 "uuid": "19e3db31-2fee-44a9-bddf-f23e0fc4ac45", 00:26:03.503 "is_configured": true, 00:26:03.503 "data_offset": 2048, 00:26:03.503 "data_size": 63488 00:26:03.503 }, 00:26:03.503 { 00:26:03.503 "name": "BaseBdev2", 00:26:03.503 "uuid": "73f3c305-0029-4472-981e-0a5590587969", 00:26:03.503 "is_configured": true, 00:26:03.503 "data_offset": 2048, 00:26:03.503 "data_size": 63488 00:26:03.503 }, 00:26:03.503 { 00:26:03.503 "name": "BaseBdev3", 00:26:03.503 "uuid": "9976dfe8-ad82-42f7-bf42-335779ff2add", 00:26:03.503 "is_configured": true, 00:26:03.503 "data_offset": 2048, 00:26:03.503 "data_size": 63488 00:26:03.503 } 00:26:03.503 ] 00:26:03.503 }' 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:03.503 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.070 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:04.070 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:04.070 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:26:04.070 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:04.070 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:04.070 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:04.070 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:04.070 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.070 17:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:04.070 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.070 [2024-11-26 17:22:33.968474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:04.070 17:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.070 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:04.070 "name": "Existed_Raid", 00:26:04.070 "aliases": [ 00:26:04.070 "90afd49d-e861-4847-aa5e-50498c8fc14d" 00:26:04.070 ], 00:26:04.070 "product_name": "Raid Volume", 00:26:04.070 "block_size": 512, 00:26:04.070 "num_blocks": 63488, 00:26:04.070 "uuid": "90afd49d-e861-4847-aa5e-50498c8fc14d", 00:26:04.070 "assigned_rate_limits": { 00:26:04.070 "rw_ios_per_sec": 0, 00:26:04.070 "rw_mbytes_per_sec": 0, 00:26:04.070 "r_mbytes_per_sec": 0, 00:26:04.070 "w_mbytes_per_sec": 0 00:26:04.070 }, 00:26:04.070 "claimed": false, 00:26:04.070 "zoned": false, 00:26:04.070 "supported_io_types": { 00:26:04.070 "read": true, 00:26:04.070 "write": true, 00:26:04.070 "unmap": false, 00:26:04.070 "flush": false, 00:26:04.070 "reset": true, 00:26:04.070 "nvme_admin": false, 00:26:04.070 "nvme_io": false, 00:26:04.070 "nvme_io_md": false, 00:26:04.070 "write_zeroes": true, 
00:26:04.070 "zcopy": false, 00:26:04.070 "get_zone_info": false, 00:26:04.070 "zone_management": false, 00:26:04.070 "zone_append": false, 00:26:04.071 "compare": false, 00:26:04.071 "compare_and_write": false, 00:26:04.071 "abort": false, 00:26:04.071 "seek_hole": false, 00:26:04.071 "seek_data": false, 00:26:04.071 "copy": false, 00:26:04.071 "nvme_iov_md": false 00:26:04.071 }, 00:26:04.071 "memory_domains": [ 00:26:04.071 { 00:26:04.071 "dma_device_id": "system", 00:26:04.071 "dma_device_type": 1 00:26:04.071 }, 00:26:04.071 { 00:26:04.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.071 "dma_device_type": 2 00:26:04.071 }, 00:26:04.071 { 00:26:04.071 "dma_device_id": "system", 00:26:04.071 "dma_device_type": 1 00:26:04.071 }, 00:26:04.071 { 00:26:04.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.071 "dma_device_type": 2 00:26:04.071 }, 00:26:04.071 { 00:26:04.071 "dma_device_id": "system", 00:26:04.071 "dma_device_type": 1 00:26:04.071 }, 00:26:04.071 { 00:26:04.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.071 "dma_device_type": 2 00:26:04.071 } 00:26:04.071 ], 00:26:04.071 "driver_specific": { 00:26:04.071 "raid": { 00:26:04.071 "uuid": "90afd49d-e861-4847-aa5e-50498c8fc14d", 00:26:04.071 "strip_size_kb": 0, 00:26:04.071 "state": "online", 00:26:04.071 "raid_level": "raid1", 00:26:04.071 "superblock": true, 00:26:04.071 "num_base_bdevs": 3, 00:26:04.071 "num_base_bdevs_discovered": 3, 00:26:04.071 "num_base_bdevs_operational": 3, 00:26:04.071 "base_bdevs_list": [ 00:26:04.071 { 00:26:04.071 "name": "BaseBdev1", 00:26:04.071 "uuid": "19e3db31-2fee-44a9-bddf-f23e0fc4ac45", 00:26:04.071 "is_configured": true, 00:26:04.071 "data_offset": 2048, 00:26:04.071 "data_size": 63488 00:26:04.071 }, 00:26:04.071 { 00:26:04.071 "name": "BaseBdev2", 00:26:04.071 "uuid": "73f3c305-0029-4472-981e-0a5590587969", 00:26:04.071 "is_configured": true, 00:26:04.071 "data_offset": 2048, 00:26:04.071 "data_size": 63488 00:26:04.071 }, 00:26:04.071 { 
00:26:04.071 "name": "BaseBdev3", 00:26:04.071 "uuid": "9976dfe8-ad82-42f7-bf42-335779ff2add", 00:26:04.071 "is_configured": true, 00:26:04.071 "data_offset": 2048, 00:26:04.071 "data_size": 63488 00:26:04.071 } 00:26:04.071 ] 00:26:04.071 } 00:26:04.071 } 00:26:04.071 }' 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:04.072 BaseBdev2 00:26:04.072 BaseBdev3' 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:04.072 17:22:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.072 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.331 [2024-11-26 17:22:34.247819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:04.331 
17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:04.331 "name": "Existed_Raid", 00:26:04.331 "uuid": "90afd49d-e861-4847-aa5e-50498c8fc14d", 00:26:04.331 "strip_size_kb": 0, 00:26:04.331 "state": "online", 00:26:04.331 "raid_level": "raid1", 00:26:04.331 "superblock": true, 00:26:04.331 "num_base_bdevs": 3, 00:26:04.331 "num_base_bdevs_discovered": 2, 00:26:04.331 "num_base_bdevs_operational": 2, 00:26:04.331 "base_bdevs_list": [ 00:26:04.331 { 00:26:04.331 "name": null, 00:26:04.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.331 "is_configured": false, 00:26:04.331 "data_offset": 0, 00:26:04.331 "data_size": 63488 00:26:04.331 }, 00:26:04.331 { 00:26:04.331 "name": "BaseBdev2", 00:26:04.331 "uuid": "73f3c305-0029-4472-981e-0a5590587969", 00:26:04.331 "is_configured": true, 00:26:04.331 "data_offset": 2048, 00:26:04.331 "data_size": 63488 00:26:04.331 }, 00:26:04.331 { 00:26:04.331 "name": "BaseBdev3", 00:26:04.331 "uuid": "9976dfe8-ad82-42f7-bf42-335779ff2add", 00:26:04.331 "is_configured": true, 00:26:04.331 "data_offset": 2048, 00:26:04.331 "data_size": 63488 00:26:04.331 } 00:26:04.331 ] 00:26:04.331 }' 00:26:04.331 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:04.331 
17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.897 [2024-11-26 17:22:34.845728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:04.897 17:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:04.898 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.898 17:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.898 [2024-11-26 17:22:35.003875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:04.898 [2024-11-26 17:22:35.004030] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:05.155 [2024-11-26 17:22:35.111374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:05.155 [2024-11-26 17:22:35.111720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:05.155 [2024-11-26 17:22:35.111759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.155 BaseBdev2 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.155 [ 00:26:05.155 { 00:26:05.155 "name": "BaseBdev2", 00:26:05.155 "aliases": [ 00:26:05.155 "1dab5e49-ab49-42a5-96ff-c26f37bb0ad2" 00:26:05.155 ], 00:26:05.155 "product_name": "Malloc disk", 00:26:05.155 "block_size": 512, 00:26:05.155 "num_blocks": 65536, 00:26:05.155 "uuid": "1dab5e49-ab49-42a5-96ff-c26f37bb0ad2", 00:26:05.155 "assigned_rate_limits": { 00:26:05.155 "rw_ios_per_sec": 0, 00:26:05.155 "rw_mbytes_per_sec": 0, 00:26:05.155 "r_mbytes_per_sec": 0, 00:26:05.155 "w_mbytes_per_sec": 0 00:26:05.155 }, 00:26:05.155 "claimed": false, 00:26:05.155 "zoned": false, 00:26:05.155 "supported_io_types": { 00:26:05.155 "read": true, 00:26:05.155 "write": true, 00:26:05.155 "unmap": true, 00:26:05.155 "flush": true, 00:26:05.155 "reset": true, 00:26:05.155 "nvme_admin": false, 00:26:05.155 "nvme_io": false, 00:26:05.155 
"nvme_io_md": false, 00:26:05.155 "write_zeroes": true, 00:26:05.155 "zcopy": true, 00:26:05.155 "get_zone_info": false, 00:26:05.155 "zone_management": false, 00:26:05.155 "zone_append": false, 00:26:05.155 "compare": false, 00:26:05.155 "compare_and_write": false, 00:26:05.155 "abort": true, 00:26:05.155 "seek_hole": false, 00:26:05.155 "seek_data": false, 00:26:05.155 "copy": true, 00:26:05.155 "nvme_iov_md": false 00:26:05.155 }, 00:26:05.155 "memory_domains": [ 00:26:05.155 { 00:26:05.155 "dma_device_id": "system", 00:26:05.155 "dma_device_type": 1 00:26:05.155 }, 00:26:05.155 { 00:26:05.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:05.155 "dma_device_type": 2 00:26:05.155 } 00:26:05.155 ], 00:26:05.155 "driver_specific": {} 00:26:05.155 } 00:26:05.155 ] 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.155 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.413 BaseBdev3 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.413 [ 00:26:05.413 { 00:26:05.413 "name": "BaseBdev3", 00:26:05.413 "aliases": [ 00:26:05.413 "b40cb1df-c9e7-48f3-aaea-447849f564e2" 00:26:05.413 ], 00:26:05.413 "product_name": "Malloc disk", 00:26:05.413 "block_size": 512, 00:26:05.413 "num_blocks": 65536, 00:26:05.413 "uuid": "b40cb1df-c9e7-48f3-aaea-447849f564e2", 00:26:05.413 "assigned_rate_limits": { 00:26:05.413 "rw_ios_per_sec": 0, 00:26:05.413 "rw_mbytes_per_sec": 0, 00:26:05.413 "r_mbytes_per_sec": 0, 00:26:05.413 "w_mbytes_per_sec": 0 00:26:05.413 }, 00:26:05.413 "claimed": false, 00:26:05.413 "zoned": false, 00:26:05.413 "supported_io_types": { 00:26:05.413 "read": true, 00:26:05.413 "write": true, 00:26:05.413 "unmap": true, 00:26:05.413 "flush": true, 00:26:05.413 "reset": true, 00:26:05.413 "nvme_admin": false, 
00:26:05.413 "nvme_io": false, 00:26:05.413 "nvme_io_md": false, 00:26:05.413 "write_zeroes": true, 00:26:05.413 "zcopy": true, 00:26:05.413 "get_zone_info": false, 00:26:05.413 "zone_management": false, 00:26:05.413 "zone_append": false, 00:26:05.413 "compare": false, 00:26:05.413 "compare_and_write": false, 00:26:05.413 "abort": true, 00:26:05.413 "seek_hole": false, 00:26:05.413 "seek_data": false, 00:26:05.413 "copy": true, 00:26:05.413 "nvme_iov_md": false 00:26:05.413 }, 00:26:05.413 "memory_domains": [ 00:26:05.413 { 00:26:05.413 "dma_device_id": "system", 00:26:05.413 "dma_device_type": 1 00:26:05.413 }, 00:26:05.413 { 00:26:05.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:05.413 "dma_device_type": 2 00:26:05.413 } 00:26:05.413 ], 00:26:05.413 "driver_specific": {} 00:26:05.413 } 00:26:05.413 ] 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.413 [2024-11-26 17:22:35.331281] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:05.413 [2024-11-26 17:22:35.331586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:05.413 [2024-11-26 17:22:35.331722] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:05.413 [2024-11-26 17:22:35.334631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.413 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.414 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:05.414 
17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.414 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:05.414 "name": "Existed_Raid", 00:26:05.414 "uuid": "801ee173-7d34-4393-920f-82e2ced16ea0", 00:26:05.414 "strip_size_kb": 0, 00:26:05.414 "state": "configuring", 00:26:05.414 "raid_level": "raid1", 00:26:05.414 "superblock": true, 00:26:05.414 "num_base_bdevs": 3, 00:26:05.414 "num_base_bdevs_discovered": 2, 00:26:05.414 "num_base_bdevs_operational": 3, 00:26:05.414 "base_bdevs_list": [ 00:26:05.414 { 00:26:05.414 "name": "BaseBdev1", 00:26:05.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.414 "is_configured": false, 00:26:05.414 "data_offset": 0, 00:26:05.414 "data_size": 0 00:26:05.414 }, 00:26:05.414 { 00:26:05.414 "name": "BaseBdev2", 00:26:05.414 "uuid": "1dab5e49-ab49-42a5-96ff-c26f37bb0ad2", 00:26:05.414 "is_configured": true, 00:26:05.414 "data_offset": 2048, 00:26:05.414 "data_size": 63488 00:26:05.414 }, 00:26:05.414 { 00:26:05.414 "name": "BaseBdev3", 00:26:05.414 "uuid": "b40cb1df-c9e7-48f3-aaea-447849f564e2", 00:26:05.414 "is_configured": true, 00:26:05.414 "data_offset": 2048, 00:26:05.414 "data_size": 63488 00:26:05.414 } 00:26:05.414 ] 00:26:05.414 }' 00:26:05.414 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:05.414 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.978 [2024-11-26 17:22:35.822748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:05.978 17:22:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.978 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:05.978 "name": 
"Existed_Raid", 00:26:05.978 "uuid": "801ee173-7d34-4393-920f-82e2ced16ea0", 00:26:05.978 "strip_size_kb": 0, 00:26:05.978 "state": "configuring", 00:26:05.978 "raid_level": "raid1", 00:26:05.978 "superblock": true, 00:26:05.978 "num_base_bdevs": 3, 00:26:05.978 "num_base_bdevs_discovered": 1, 00:26:05.978 "num_base_bdevs_operational": 3, 00:26:05.978 "base_bdevs_list": [ 00:26:05.978 { 00:26:05.978 "name": "BaseBdev1", 00:26:05.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.978 "is_configured": false, 00:26:05.978 "data_offset": 0, 00:26:05.978 "data_size": 0 00:26:05.978 }, 00:26:05.978 { 00:26:05.978 "name": null, 00:26:05.979 "uuid": "1dab5e49-ab49-42a5-96ff-c26f37bb0ad2", 00:26:05.979 "is_configured": false, 00:26:05.979 "data_offset": 0, 00:26:05.979 "data_size": 63488 00:26:05.979 }, 00:26:05.979 { 00:26:05.979 "name": "BaseBdev3", 00:26:05.979 "uuid": "b40cb1df-c9e7-48f3-aaea-447849f564e2", 00:26:05.979 "is_configured": true, 00:26:05.979 "data_offset": 2048, 00:26:05.979 "data_size": 63488 00:26:05.979 } 00:26:05.979 ] 00:26:05.979 }' 00:26:05.979 17:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:05.979 17:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.237 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.237 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:06.237 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.237 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.237 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.237 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:06.237 
17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:06.237 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.237 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.496 [2024-11-26 17:22:36.376805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:06.496 BaseBdev1 00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.496 [ 00:26:06.496 { 00:26:06.496 "name": "BaseBdev1", 00:26:06.496 "aliases": [ 00:26:06.496 "a9f4486d-0a1a-4148-887d-ef9026edbc5f" 00:26:06.496 ], 00:26:06.496 "product_name": "Malloc disk", 00:26:06.496 "block_size": 512, 00:26:06.496 "num_blocks": 65536, 00:26:06.496 "uuid": "a9f4486d-0a1a-4148-887d-ef9026edbc5f", 00:26:06.496 "assigned_rate_limits": { 00:26:06.496 "rw_ios_per_sec": 0, 00:26:06.496 "rw_mbytes_per_sec": 0, 00:26:06.496 "r_mbytes_per_sec": 0, 00:26:06.496 "w_mbytes_per_sec": 0 00:26:06.496 }, 00:26:06.496 "claimed": true, 00:26:06.496 "claim_type": "exclusive_write", 00:26:06.496 "zoned": false, 00:26:06.496 "supported_io_types": { 00:26:06.496 "read": true, 00:26:06.496 "write": true, 00:26:06.496 "unmap": true, 00:26:06.496 "flush": true, 00:26:06.496 "reset": true, 00:26:06.496 "nvme_admin": false, 00:26:06.496 "nvme_io": false, 00:26:06.496 "nvme_io_md": false, 00:26:06.496 "write_zeroes": true, 00:26:06.496 "zcopy": true, 00:26:06.496 "get_zone_info": false, 00:26:06.496 "zone_management": false, 00:26:06.496 "zone_append": false, 00:26:06.496 "compare": false, 00:26:06.496 "compare_and_write": false, 00:26:06.496 "abort": true, 00:26:06.496 "seek_hole": false, 00:26:06.496 "seek_data": false, 00:26:06.496 "copy": true, 00:26:06.496 "nvme_iov_md": false 00:26:06.496 }, 00:26:06.496 "memory_domains": [ 00:26:06.496 { 00:26:06.496 "dma_device_id": "system", 00:26:06.496 "dma_device_type": 1 00:26:06.496 }, 00:26:06.496 { 00:26:06.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.496 "dma_device_type": 2 00:26:06.496 } 00:26:06.496 ], 00:26:06.496 "driver_specific": {} 00:26:06.496 } 00:26:06.496 ] 00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.496 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:06.496 
17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:06.497 "name": "Existed_Raid", 00:26:06.497 "uuid": "801ee173-7d34-4393-920f-82e2ced16ea0", 00:26:06.497 "strip_size_kb": 0, 
00:26:06.497 "state": "configuring", 00:26:06.497 "raid_level": "raid1", 00:26:06.497 "superblock": true, 00:26:06.497 "num_base_bdevs": 3, 00:26:06.497 "num_base_bdevs_discovered": 2, 00:26:06.497 "num_base_bdevs_operational": 3, 00:26:06.497 "base_bdevs_list": [ 00:26:06.497 { 00:26:06.497 "name": "BaseBdev1", 00:26:06.497 "uuid": "a9f4486d-0a1a-4148-887d-ef9026edbc5f", 00:26:06.497 "is_configured": true, 00:26:06.497 "data_offset": 2048, 00:26:06.497 "data_size": 63488 00:26:06.497 }, 00:26:06.497 { 00:26:06.497 "name": null, 00:26:06.497 "uuid": "1dab5e49-ab49-42a5-96ff-c26f37bb0ad2", 00:26:06.497 "is_configured": false, 00:26:06.497 "data_offset": 0, 00:26:06.497 "data_size": 63488 00:26:06.497 }, 00:26:06.497 { 00:26:06.497 "name": "BaseBdev3", 00:26:06.497 "uuid": "b40cb1df-c9e7-48f3-aaea-447849f564e2", 00:26:06.497 "is_configured": true, 00:26:06.497 "data_offset": 2048, 00:26:06.497 "data_size": 63488 00:26:06.497 } 00:26:06.497 ] 00:26:06.497 }' 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:06.497 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.756 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.756 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.756 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:06.756 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.015 [2024-11-26 17:22:36.904216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.015 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:07.016 "name": "Existed_Raid", 00:26:07.016 "uuid": "801ee173-7d34-4393-920f-82e2ced16ea0", 00:26:07.016 "strip_size_kb": 0, 00:26:07.016 "state": "configuring", 00:26:07.016 "raid_level": "raid1", 00:26:07.016 "superblock": true, 00:26:07.016 "num_base_bdevs": 3, 00:26:07.016 "num_base_bdevs_discovered": 1, 00:26:07.016 "num_base_bdevs_operational": 3, 00:26:07.016 "base_bdevs_list": [ 00:26:07.016 { 00:26:07.016 "name": "BaseBdev1", 00:26:07.016 "uuid": "a9f4486d-0a1a-4148-887d-ef9026edbc5f", 00:26:07.016 "is_configured": true, 00:26:07.016 "data_offset": 2048, 00:26:07.016 "data_size": 63488 00:26:07.016 }, 00:26:07.016 { 00:26:07.016 "name": null, 00:26:07.016 "uuid": "1dab5e49-ab49-42a5-96ff-c26f37bb0ad2", 00:26:07.016 "is_configured": false, 00:26:07.016 "data_offset": 0, 00:26:07.016 "data_size": 63488 00:26:07.016 }, 00:26:07.016 { 00:26:07.016 "name": null, 00:26:07.016 "uuid": "b40cb1df-c9e7-48f3-aaea-447849f564e2", 00:26:07.016 "is_configured": false, 00:26:07.016 "data_offset": 0, 00:26:07.016 "data_size": 63488 00:26:07.016 } 00:26:07.016 ] 00:26:07.016 }' 00:26:07.016 17:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:07.016 17:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.275 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:07.275 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.275 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:07.275 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.275 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.275 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:07.275 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:07.275 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.275 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.533 [2024-11-26 17:22:37.391593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:07.533 "name": "Existed_Raid", 00:26:07.533 "uuid": "801ee173-7d34-4393-920f-82e2ced16ea0", 00:26:07.533 "strip_size_kb": 0, 00:26:07.533 "state": "configuring", 00:26:07.533 "raid_level": "raid1", 00:26:07.533 "superblock": true, 00:26:07.533 "num_base_bdevs": 3, 00:26:07.533 "num_base_bdevs_discovered": 2, 00:26:07.533 "num_base_bdevs_operational": 3, 00:26:07.533 "base_bdevs_list": [ 00:26:07.533 { 00:26:07.533 "name": "BaseBdev1", 00:26:07.533 "uuid": "a9f4486d-0a1a-4148-887d-ef9026edbc5f", 00:26:07.533 "is_configured": true, 00:26:07.533 "data_offset": 2048, 00:26:07.533 "data_size": 63488 00:26:07.533 }, 00:26:07.533 { 00:26:07.533 "name": null, 00:26:07.533 "uuid": "1dab5e49-ab49-42a5-96ff-c26f37bb0ad2", 00:26:07.533 "is_configured": false, 00:26:07.533 "data_offset": 0, 00:26:07.533 "data_size": 63488 00:26:07.533 }, 00:26:07.533 { 00:26:07.533 "name": "BaseBdev3", 00:26:07.533 "uuid": "b40cb1df-c9e7-48f3-aaea-447849f564e2", 00:26:07.533 "is_configured": true, 00:26:07.533 "data_offset": 2048, 00:26:07.533 "data_size": 63488 00:26:07.533 } 00:26:07.533 ] 00:26:07.533 }' 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:07.533 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.791 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.791 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.791 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:07.791 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.791 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.791 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:07.791 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:07.791 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.791 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.791 [2024-11-26 17:22:37.886883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:08.050 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.050 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:08.050 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:08.050 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:08.050 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:08.050 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:26:08.050 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:08.050 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:08.050 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:08.050 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:08.050 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:08.050 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.050 17:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:08.050 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.050 17:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.050 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.050 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:08.050 "name": "Existed_Raid", 00:26:08.050 "uuid": "801ee173-7d34-4393-920f-82e2ced16ea0", 00:26:08.050 "strip_size_kb": 0, 00:26:08.050 "state": "configuring", 00:26:08.050 "raid_level": "raid1", 00:26:08.050 "superblock": true, 00:26:08.050 "num_base_bdevs": 3, 00:26:08.050 "num_base_bdevs_discovered": 1, 00:26:08.050 "num_base_bdevs_operational": 3, 00:26:08.050 "base_bdevs_list": [ 00:26:08.050 { 00:26:08.050 "name": null, 00:26:08.050 "uuid": "a9f4486d-0a1a-4148-887d-ef9026edbc5f", 00:26:08.050 "is_configured": false, 00:26:08.050 "data_offset": 0, 00:26:08.050 "data_size": 63488 00:26:08.050 }, 00:26:08.050 { 00:26:08.050 "name": null, 00:26:08.050 "uuid": 
"1dab5e49-ab49-42a5-96ff-c26f37bb0ad2", 00:26:08.050 "is_configured": false, 00:26:08.050 "data_offset": 0, 00:26:08.050 "data_size": 63488 00:26:08.050 }, 00:26:08.050 { 00:26:08.050 "name": "BaseBdev3", 00:26:08.050 "uuid": "b40cb1df-c9e7-48f3-aaea-447849f564e2", 00:26:08.050 "is_configured": true, 00:26:08.050 "data_offset": 2048, 00:26:08.050 "data_size": 63488 00:26:08.050 } 00:26:08.050 ] 00:26:08.050 }' 00:26:08.050 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:08.050 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.309 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.309 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:08.309 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.310 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.569 [2024-11-26 17:22:38.445743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:08.569 "name": "Existed_Raid", 00:26:08.569 "uuid": "801ee173-7d34-4393-920f-82e2ced16ea0", 00:26:08.569 "strip_size_kb": 0, 00:26:08.569 "state": "configuring", 00:26:08.569 
"raid_level": "raid1", 00:26:08.569 "superblock": true, 00:26:08.569 "num_base_bdevs": 3, 00:26:08.569 "num_base_bdevs_discovered": 2, 00:26:08.569 "num_base_bdevs_operational": 3, 00:26:08.569 "base_bdevs_list": [ 00:26:08.569 { 00:26:08.569 "name": null, 00:26:08.569 "uuid": "a9f4486d-0a1a-4148-887d-ef9026edbc5f", 00:26:08.569 "is_configured": false, 00:26:08.569 "data_offset": 0, 00:26:08.569 "data_size": 63488 00:26:08.569 }, 00:26:08.569 { 00:26:08.569 "name": "BaseBdev2", 00:26:08.569 "uuid": "1dab5e49-ab49-42a5-96ff-c26f37bb0ad2", 00:26:08.569 "is_configured": true, 00:26:08.569 "data_offset": 2048, 00:26:08.569 "data_size": 63488 00:26:08.569 }, 00:26:08.569 { 00:26:08.569 "name": "BaseBdev3", 00:26:08.569 "uuid": "b40cb1df-c9e7-48f3-aaea-447849f564e2", 00:26:08.569 "is_configured": true, 00:26:08.569 "data_offset": 2048, 00:26:08.569 "data_size": 63488 00:26:08.569 } 00:26:08.569 ] 00:26:08.569 }' 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:08.569 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.828 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:08.828 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.828 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.828 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.828 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.828 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:08.828 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.828 17:22:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:08.828 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.828 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.087 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.087 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a9f4486d-0a1a-4148-887d-ef9026edbc5f 00:26:09.087 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.087 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.087 [2024-11-26 17:22:38.998929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:09.087 [2024-11-26 17:22:38.999468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:09.087 NewBaseBdev 00:26:09.087 [2024-11-26 17:22:38.999632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:09.087 [2024-11-26 17:22:38.999967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:09.087 [2024-11-26 17:22:39.000148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:09.087 [2024-11-26 17:22:39.000164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:09.087 [2024-11-26 17:22:39.000319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:09.087 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.087 17:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:09.087 
17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:26:09.087 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:09.087 17:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.087 [ 00:26:09.087 { 00:26:09.087 "name": "NewBaseBdev", 00:26:09.087 "aliases": [ 00:26:09.087 "a9f4486d-0a1a-4148-887d-ef9026edbc5f" 00:26:09.087 ], 00:26:09.087 "product_name": "Malloc disk", 00:26:09.087 "block_size": 512, 00:26:09.087 "num_blocks": 65536, 00:26:09.087 "uuid": "a9f4486d-0a1a-4148-887d-ef9026edbc5f", 00:26:09.087 "assigned_rate_limits": { 00:26:09.087 "rw_ios_per_sec": 0, 00:26:09.087 "rw_mbytes_per_sec": 0, 00:26:09.087 "r_mbytes_per_sec": 0, 00:26:09.087 "w_mbytes_per_sec": 0 00:26:09.087 }, 00:26:09.087 "claimed": true, 00:26:09.087 "claim_type": "exclusive_write", 00:26:09.087 
"zoned": false, 00:26:09.087 "supported_io_types": { 00:26:09.087 "read": true, 00:26:09.087 "write": true, 00:26:09.087 "unmap": true, 00:26:09.087 "flush": true, 00:26:09.087 "reset": true, 00:26:09.087 "nvme_admin": false, 00:26:09.087 "nvme_io": false, 00:26:09.087 "nvme_io_md": false, 00:26:09.087 "write_zeroes": true, 00:26:09.087 "zcopy": true, 00:26:09.087 "get_zone_info": false, 00:26:09.087 "zone_management": false, 00:26:09.087 "zone_append": false, 00:26:09.087 "compare": false, 00:26:09.087 "compare_and_write": false, 00:26:09.087 "abort": true, 00:26:09.087 "seek_hole": false, 00:26:09.087 "seek_data": false, 00:26:09.087 "copy": true, 00:26:09.087 "nvme_iov_md": false 00:26:09.087 }, 00:26:09.087 "memory_domains": [ 00:26:09.087 { 00:26:09.087 "dma_device_id": "system", 00:26:09.087 "dma_device_type": 1 00:26:09.087 }, 00:26:09.087 { 00:26:09.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.087 "dma_device_type": 2 00:26:09.087 } 00:26:09.087 ], 00:26:09.087 "driver_specific": {} 00:26:09.087 } 00:26:09.087 ] 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:09.087 "name": "Existed_Raid", 00:26:09.087 "uuid": "801ee173-7d34-4393-920f-82e2ced16ea0", 00:26:09.087 "strip_size_kb": 0, 00:26:09.087 "state": "online", 00:26:09.087 "raid_level": "raid1", 00:26:09.087 "superblock": true, 00:26:09.087 "num_base_bdevs": 3, 00:26:09.087 "num_base_bdevs_discovered": 3, 00:26:09.087 "num_base_bdevs_operational": 3, 00:26:09.087 "base_bdevs_list": [ 00:26:09.087 { 00:26:09.087 "name": "NewBaseBdev", 00:26:09.087 "uuid": "a9f4486d-0a1a-4148-887d-ef9026edbc5f", 00:26:09.087 "is_configured": true, 00:26:09.087 "data_offset": 2048, 00:26:09.087 "data_size": 63488 00:26:09.087 }, 00:26:09.087 { 00:26:09.087 "name": "BaseBdev2", 00:26:09.087 "uuid": "1dab5e49-ab49-42a5-96ff-c26f37bb0ad2", 00:26:09.087 "is_configured": true, 00:26:09.087 "data_offset": 2048, 00:26:09.087 "data_size": 63488 00:26:09.087 }, 00:26:09.087 
{ 00:26:09.087 "name": "BaseBdev3", 00:26:09.087 "uuid": "b40cb1df-c9e7-48f3-aaea-447849f564e2", 00:26:09.087 "is_configured": true, 00:26:09.087 "data_offset": 2048, 00:26:09.087 "data_size": 63488 00:26:09.087 } 00:26:09.087 ] 00:26:09.087 }' 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:09.087 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.345 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:09.345 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:09.345 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:09.345 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:09.345 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:09.345 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.604 [2024-11-26 17:22:39.466703] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:09.604 "name": "Existed_Raid", 00:26:09.604 
"aliases": [ 00:26:09.604 "801ee173-7d34-4393-920f-82e2ced16ea0" 00:26:09.604 ], 00:26:09.604 "product_name": "Raid Volume", 00:26:09.604 "block_size": 512, 00:26:09.604 "num_blocks": 63488, 00:26:09.604 "uuid": "801ee173-7d34-4393-920f-82e2ced16ea0", 00:26:09.604 "assigned_rate_limits": { 00:26:09.604 "rw_ios_per_sec": 0, 00:26:09.604 "rw_mbytes_per_sec": 0, 00:26:09.604 "r_mbytes_per_sec": 0, 00:26:09.604 "w_mbytes_per_sec": 0 00:26:09.604 }, 00:26:09.604 "claimed": false, 00:26:09.604 "zoned": false, 00:26:09.604 "supported_io_types": { 00:26:09.604 "read": true, 00:26:09.604 "write": true, 00:26:09.604 "unmap": false, 00:26:09.604 "flush": false, 00:26:09.604 "reset": true, 00:26:09.604 "nvme_admin": false, 00:26:09.604 "nvme_io": false, 00:26:09.604 "nvme_io_md": false, 00:26:09.604 "write_zeroes": true, 00:26:09.604 "zcopy": false, 00:26:09.604 "get_zone_info": false, 00:26:09.604 "zone_management": false, 00:26:09.604 "zone_append": false, 00:26:09.604 "compare": false, 00:26:09.604 "compare_and_write": false, 00:26:09.604 "abort": false, 00:26:09.604 "seek_hole": false, 00:26:09.604 "seek_data": false, 00:26:09.604 "copy": false, 00:26:09.604 "nvme_iov_md": false 00:26:09.604 }, 00:26:09.604 "memory_domains": [ 00:26:09.604 { 00:26:09.604 "dma_device_id": "system", 00:26:09.604 "dma_device_type": 1 00:26:09.604 }, 00:26:09.604 { 00:26:09.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.604 "dma_device_type": 2 00:26:09.604 }, 00:26:09.604 { 00:26:09.604 "dma_device_id": "system", 00:26:09.604 "dma_device_type": 1 00:26:09.604 }, 00:26:09.604 { 00:26:09.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.604 "dma_device_type": 2 00:26:09.604 }, 00:26:09.604 { 00:26:09.604 "dma_device_id": "system", 00:26:09.604 "dma_device_type": 1 00:26:09.604 }, 00:26:09.604 { 00:26:09.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.604 "dma_device_type": 2 00:26:09.604 } 00:26:09.604 ], 00:26:09.604 "driver_specific": { 00:26:09.604 "raid": { 00:26:09.604 
"uuid": "801ee173-7d34-4393-920f-82e2ced16ea0", 00:26:09.604 "strip_size_kb": 0, 00:26:09.604 "state": "online", 00:26:09.604 "raid_level": "raid1", 00:26:09.604 "superblock": true, 00:26:09.604 "num_base_bdevs": 3, 00:26:09.604 "num_base_bdevs_discovered": 3, 00:26:09.604 "num_base_bdevs_operational": 3, 00:26:09.604 "base_bdevs_list": [ 00:26:09.604 { 00:26:09.604 "name": "NewBaseBdev", 00:26:09.604 "uuid": "a9f4486d-0a1a-4148-887d-ef9026edbc5f", 00:26:09.604 "is_configured": true, 00:26:09.604 "data_offset": 2048, 00:26:09.604 "data_size": 63488 00:26:09.604 }, 00:26:09.604 { 00:26:09.604 "name": "BaseBdev2", 00:26:09.604 "uuid": "1dab5e49-ab49-42a5-96ff-c26f37bb0ad2", 00:26:09.604 "is_configured": true, 00:26:09.604 "data_offset": 2048, 00:26:09.604 "data_size": 63488 00:26:09.604 }, 00:26:09.604 { 00:26:09.604 "name": "BaseBdev3", 00:26:09.604 "uuid": "b40cb1df-c9e7-48f3-aaea-447849f564e2", 00:26:09.604 "is_configured": true, 00:26:09.604 "data_offset": 2048, 00:26:09.604 "data_size": 63488 00:26:09.604 } 00:26:09.604 ] 00:26:09.604 } 00:26:09.604 } 00:26:09.604 }' 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:09.604 BaseBdev2 00:26:09.604 BaseBdev3' 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:09.604 17:22:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:09.604 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.863 [2024-11-26 17:22:39.769947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:09.863 [2024-11-26 17:22:39.770008] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:09.863 [2024-11-26 17:22:39.770102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:09.863 [2024-11-26 17:22:39.770409] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:09.863 [2024-11-26 17:22:39.770423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68111 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68111 ']' 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68111 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68111 00:26:09.863 killing process with pid 68111 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68111' 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68111 00:26:09.863 17:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68111 00:26:09.863 [2024-11-26 17:22:39.823667] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:10.123 [2024-11-26 17:22:40.141426] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:11.501 17:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:26:11.501 00:26:11.501 real 0m10.846s 00:26:11.501 user 0m17.125s 00:26:11.501 sys 0m2.191s 00:26:11.501 17:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:11.501 17:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:11.501 ************************************ 00:26:11.501 END TEST raid_state_function_test_sb 00:26:11.501 ************************************ 00:26:11.501 17:22:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:26:11.501 17:22:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:11.501 17:22:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:11.501 17:22:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:11.501 ************************************ 00:26:11.501 START TEST raid_superblock_test 00:26:11.501 ************************************ 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68737 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68737 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68737 ']' 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.501 17:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.502 17:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.502 17:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.502 [2024-11-26 17:22:41.556168] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:26:11.502 [2024-11-26 17:22:41.556327] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68737 ] 00:26:11.761 [2024-11-26 17:22:41.753916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.030 [2024-11-26 17:22:41.907162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.288 [2024-11-26 17:22:42.142388] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:12.288 [2024-11-26 17:22:42.142471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:26:12.548 
17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.548 malloc1 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.548 [2024-11-26 17:22:42.486264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:12.548 [2024-11-26 17:22:42.486350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:12.548 [2024-11-26 17:22:42.486380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:12.548 [2024-11-26 17:22:42.486394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.548 [2024-11-26 17:22:42.489103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.548 [2024-11-26 17:22:42.489150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:12.548 pt1 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.548 malloc2 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.548 [2024-11-26 17:22:42.551601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:12.548 [2024-11-26 17:22:42.551911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:12.548 [2024-11-26 17:22:42.551960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:12.548 [2024-11-26 17:22:42.551975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.548 [2024-11-26 17:22:42.555152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.548 [2024-11-26 17:22:42.555334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:12.548 
pt2 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:26:12.548 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.549 malloc3 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.549 [2024-11-26 17:22:42.622698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:12.549 [2024-11-26 17:22:42.622780] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:12.549 [2024-11-26 17:22:42.622807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:12.549 [2024-11-26 17:22:42.622819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.549 [2024-11-26 17:22:42.625420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.549 [2024-11-26 17:22:42.625471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:12.549 pt3 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.549 [2024-11-26 17:22:42.634741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:12.549 [2024-11-26 17:22:42.637011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:12.549 [2024-11-26 17:22:42.637298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:12.549 [2024-11-26 17:22:42.637501] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:12.549 [2024-11-26 17:22:42.637546] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:12.549 [2024-11-26 17:22:42.637827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:12.549 
[2024-11-26 17:22:42.638014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:12.549 [2024-11-26 17:22:42.638029] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:12.549 [2024-11-26 17:22:42.638214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.549 17:22:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:12.808 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.808 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:12.808 "name": "raid_bdev1", 00:26:12.808 "uuid": "0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9", 00:26:12.808 "strip_size_kb": 0, 00:26:12.808 "state": "online", 00:26:12.808 "raid_level": "raid1", 00:26:12.808 "superblock": true, 00:26:12.808 "num_base_bdevs": 3, 00:26:12.808 "num_base_bdevs_discovered": 3, 00:26:12.808 "num_base_bdevs_operational": 3, 00:26:12.808 "base_bdevs_list": [ 00:26:12.808 { 00:26:12.808 "name": "pt1", 00:26:12.808 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:12.808 "is_configured": true, 00:26:12.808 "data_offset": 2048, 00:26:12.808 "data_size": 63488 00:26:12.808 }, 00:26:12.808 { 00:26:12.808 "name": "pt2", 00:26:12.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:12.808 "is_configured": true, 00:26:12.808 "data_offset": 2048, 00:26:12.808 "data_size": 63488 00:26:12.808 }, 00:26:12.808 { 00:26:12.808 "name": "pt3", 00:26:12.808 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:12.808 "is_configured": true, 00:26:12.808 "data_offset": 2048, 00:26:12.808 "data_size": 63488 00:26:12.808 } 00:26:12.808 ] 00:26:12.808 }' 00:26:12.808 17:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:12.808 17:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.067 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:26:13.067 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:13.067 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:13.067 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:13.067 17:22:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:13.067 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:13.067 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:13.067 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:13.067 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.067 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.067 [2024-11-26 17:22:43.106403] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:13.067 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.068 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:13.068 "name": "raid_bdev1", 00:26:13.068 "aliases": [ 00:26:13.068 "0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9" 00:26:13.068 ], 00:26:13.068 "product_name": "Raid Volume", 00:26:13.068 "block_size": 512, 00:26:13.068 "num_blocks": 63488, 00:26:13.068 "uuid": "0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9", 00:26:13.068 "assigned_rate_limits": { 00:26:13.068 "rw_ios_per_sec": 0, 00:26:13.068 "rw_mbytes_per_sec": 0, 00:26:13.068 "r_mbytes_per_sec": 0, 00:26:13.068 "w_mbytes_per_sec": 0 00:26:13.068 }, 00:26:13.068 "claimed": false, 00:26:13.068 "zoned": false, 00:26:13.068 "supported_io_types": { 00:26:13.068 "read": true, 00:26:13.068 "write": true, 00:26:13.068 "unmap": false, 00:26:13.068 "flush": false, 00:26:13.068 "reset": true, 00:26:13.068 "nvme_admin": false, 00:26:13.068 "nvme_io": false, 00:26:13.068 "nvme_io_md": false, 00:26:13.068 "write_zeroes": true, 00:26:13.068 "zcopy": false, 00:26:13.068 "get_zone_info": false, 00:26:13.068 "zone_management": false, 00:26:13.068 "zone_append": false, 00:26:13.068 "compare": false, 00:26:13.068 
"compare_and_write": false, 00:26:13.068 "abort": false, 00:26:13.068 "seek_hole": false, 00:26:13.068 "seek_data": false, 00:26:13.068 "copy": false, 00:26:13.068 "nvme_iov_md": false 00:26:13.068 }, 00:26:13.068 "memory_domains": [ 00:26:13.068 { 00:26:13.068 "dma_device_id": "system", 00:26:13.068 "dma_device_type": 1 00:26:13.068 }, 00:26:13.068 { 00:26:13.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:13.068 "dma_device_type": 2 00:26:13.068 }, 00:26:13.068 { 00:26:13.068 "dma_device_id": "system", 00:26:13.068 "dma_device_type": 1 00:26:13.068 }, 00:26:13.068 { 00:26:13.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:13.068 "dma_device_type": 2 00:26:13.068 }, 00:26:13.068 { 00:26:13.068 "dma_device_id": "system", 00:26:13.068 "dma_device_type": 1 00:26:13.068 }, 00:26:13.068 { 00:26:13.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:13.068 "dma_device_type": 2 00:26:13.068 } 00:26:13.068 ], 00:26:13.068 "driver_specific": { 00:26:13.068 "raid": { 00:26:13.068 "uuid": "0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9", 00:26:13.068 "strip_size_kb": 0, 00:26:13.068 "state": "online", 00:26:13.068 "raid_level": "raid1", 00:26:13.068 "superblock": true, 00:26:13.068 "num_base_bdevs": 3, 00:26:13.068 "num_base_bdevs_discovered": 3, 00:26:13.068 "num_base_bdevs_operational": 3, 00:26:13.068 "base_bdevs_list": [ 00:26:13.068 { 00:26:13.068 "name": "pt1", 00:26:13.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:13.068 "is_configured": true, 00:26:13.068 "data_offset": 2048, 00:26:13.068 "data_size": 63488 00:26:13.068 }, 00:26:13.068 { 00:26:13.068 "name": "pt2", 00:26:13.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:13.068 "is_configured": true, 00:26:13.068 "data_offset": 2048, 00:26:13.068 "data_size": 63488 00:26:13.068 }, 00:26:13.068 { 00:26:13.068 "name": "pt3", 00:26:13.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:13.068 "is_configured": true, 00:26:13.068 "data_offset": 2048, 00:26:13.068 "data_size": 63488 00:26:13.068 } 
00:26:13.068 ] 00:26:13.068 } 00:26:13.068 } 00:26:13.068 }' 00:26:13.068 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:13.068 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:13.068 pt2 00:26:13.068 pt3' 00:26:13.068 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:26:13.328 [2024-11-26 17:22:43.374050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9 ']' 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.328 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.328 [2024-11-26 17:22:43.417723] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:13.328 [2024-11-26 17:22:43.417777] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:13.329 [2024-11-26 17:22:43.417888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:13.329 [2024-11-26 17:22:43.417981] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:13.329 [2024-11-26 17:22:43.417994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:13.329 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.329 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:13.329 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.329 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.329 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:26:13.329 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:13.588 17:22:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.588 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.589 [2024-11-26 17:22:43.573777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:13.589 [2024-11-26 17:22:43.576523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:13.589 [2024-11-26 17:22:43.576620] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:13.589 [2024-11-26 17:22:43.576687] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:13.589 [2024-11-26 17:22:43.576760] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:13.589 [2024-11-26 17:22:43.576784] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:13.589 [2024-11-26 17:22:43.576808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:13.589 [2024-11-26 17:22:43.576820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:26:13.589 request: 00:26:13.589 { 00:26:13.589 "name": "raid_bdev1", 00:26:13.589 "raid_level": "raid1", 00:26:13.589 "base_bdevs": [ 00:26:13.589 "malloc1", 00:26:13.589 "malloc2", 00:26:13.589 "malloc3" 00:26:13.589 ], 00:26:13.589 "superblock": false, 00:26:13.589 "method": "bdev_raid_create", 00:26:13.589 "req_id": 1 00:26:13.589 } 00:26:13.589 Got JSON-RPC error response 00:26:13.589 response: 00:26:13.589 { 00:26:13.589 "code": -17, 00:26:13.589 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:13.589 } 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.589 [2024-11-26 17:22:43.645690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:13.589 [2024-11-26 17:22:43.645983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.589 [2024-11-26 17:22:43.646026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:13.589 [2024-11-26 17:22:43.646041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.589 [2024-11-26 17:22:43.649202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.589 [2024-11-26 17:22:43.649385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:13.589 [2024-11-26 17:22:43.649540] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:13.589 [2024-11-26 17:22:43.649613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:13.589 pt1 00:26:13.589 
17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.589 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.848 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:13.848 "name": "raid_bdev1", 00:26:13.848 "uuid": "0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9", 00:26:13.848 "strip_size_kb": 0, 00:26:13.848 
"state": "configuring", 00:26:13.848 "raid_level": "raid1", 00:26:13.848 "superblock": true, 00:26:13.848 "num_base_bdevs": 3, 00:26:13.848 "num_base_bdevs_discovered": 1, 00:26:13.848 "num_base_bdevs_operational": 3, 00:26:13.848 "base_bdevs_list": [ 00:26:13.848 { 00:26:13.848 "name": "pt1", 00:26:13.848 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:13.848 "is_configured": true, 00:26:13.848 "data_offset": 2048, 00:26:13.848 "data_size": 63488 00:26:13.848 }, 00:26:13.848 { 00:26:13.848 "name": null, 00:26:13.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:13.848 "is_configured": false, 00:26:13.848 "data_offset": 2048, 00:26:13.848 "data_size": 63488 00:26:13.848 }, 00:26:13.848 { 00:26:13.848 "name": null, 00:26:13.848 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:13.848 "is_configured": false, 00:26:13.848 "data_offset": 2048, 00:26:13.848 "data_size": 63488 00:26:13.848 } 00:26:13.848 ] 00:26:13.848 }' 00:26:13.848 17:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:13.848 17:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.108 [2024-11-26 17:22:44.105710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:14.108 [2024-11-26 17:22:44.106063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.108 [2024-11-26 17:22:44.106108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:14.108 
[2024-11-26 17:22:44.106123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.108 [2024-11-26 17:22:44.106710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.108 [2024-11-26 17:22:44.106748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:14.108 [2024-11-26 17:22:44.106869] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:14.108 [2024-11-26 17:22:44.106903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:14.108 pt2 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.108 [2024-11-26 17:22:44.113645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:14.108 "name": "raid_bdev1", 00:26:14.108 "uuid": "0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9", 00:26:14.108 "strip_size_kb": 0, 00:26:14.108 "state": "configuring", 00:26:14.108 "raid_level": "raid1", 00:26:14.108 "superblock": true, 00:26:14.108 "num_base_bdevs": 3, 00:26:14.108 "num_base_bdevs_discovered": 1, 00:26:14.108 "num_base_bdevs_operational": 3, 00:26:14.108 "base_bdevs_list": [ 00:26:14.108 { 00:26:14.108 "name": "pt1", 00:26:14.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:14.108 "is_configured": true, 00:26:14.108 "data_offset": 2048, 00:26:14.108 "data_size": 63488 00:26:14.108 }, 00:26:14.108 { 00:26:14.108 "name": null, 00:26:14.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:14.108 "is_configured": false, 00:26:14.108 "data_offset": 0, 00:26:14.108 "data_size": 63488 00:26:14.108 }, 00:26:14.108 { 00:26:14.108 "name": null, 00:26:14.108 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:14.108 "is_configured": false, 00:26:14.108 
"data_offset": 2048, 00:26:14.108 "data_size": 63488 00:26:14.108 } 00:26:14.108 ] 00:26:14.108 }' 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:14.108 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.677 [2024-11-26 17:22:44.533683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:14.677 [2024-11-26 17:22:44.534037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.677 [2024-11-26 17:22:44.534074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:14.677 [2024-11-26 17:22:44.534092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.677 [2024-11-26 17:22:44.534699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.677 [2024-11-26 17:22:44.534735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:14.677 [2024-11-26 17:22:44.534838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:14.677 [2024-11-26 17:22:44.534880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:14.677 pt2 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.677 17:22:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.677 [2024-11-26 17:22:44.545669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:14.677 [2024-11-26 17:22:44.545757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.677 [2024-11-26 17:22:44.545781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:14.677 [2024-11-26 17:22:44.545796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.677 [2024-11-26 17:22:44.546330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.677 [2024-11-26 17:22:44.546359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:14.677 [2024-11-26 17:22:44.546454] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:14.677 [2024-11-26 17:22:44.546483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:14.677 [2024-11-26 17:22:44.546671] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:14.677 [2024-11-26 17:22:44.546690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:14.677 [2024-11-26 17:22:44.546973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:14.677 [2024-11-26 17:22:44.547146] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:26:14.677 [2024-11-26 17:22:44.547156] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:14.677 [2024-11-26 17:22:44.547321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:14.677 pt3 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:14.677 "name": "raid_bdev1", 00:26:14.677 "uuid": "0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9", 00:26:14.677 "strip_size_kb": 0, 00:26:14.677 "state": "online", 00:26:14.677 "raid_level": "raid1", 00:26:14.677 "superblock": true, 00:26:14.677 "num_base_bdevs": 3, 00:26:14.677 "num_base_bdevs_discovered": 3, 00:26:14.677 "num_base_bdevs_operational": 3, 00:26:14.677 "base_bdevs_list": [ 00:26:14.677 { 00:26:14.677 "name": "pt1", 00:26:14.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:14.677 "is_configured": true, 00:26:14.677 "data_offset": 2048, 00:26:14.677 "data_size": 63488 00:26:14.677 }, 00:26:14.677 { 00:26:14.677 "name": "pt2", 00:26:14.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:14.677 "is_configured": true, 00:26:14.677 "data_offset": 2048, 00:26:14.677 "data_size": 63488 00:26:14.677 }, 00:26:14.677 { 00:26:14.677 "name": "pt3", 00:26:14.677 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:14.677 "is_configured": true, 00:26:14.677 "data_offset": 2048, 00:26:14.677 "data_size": 63488 00:26:14.677 } 00:26:14.677 ] 00:26:14.677 }' 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:14.677 17:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.937 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:26:14.937 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:14.937 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:26:14.937 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:14.937 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:14.937 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:14.937 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:14.937 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.937 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.937 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:14.937 [2024-11-26 17:22:45.018044] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:14.937 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.937 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:14.937 "name": "raid_bdev1", 00:26:14.937 "aliases": [ 00:26:14.937 "0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9" 00:26:14.937 ], 00:26:14.937 "product_name": "Raid Volume", 00:26:14.937 "block_size": 512, 00:26:14.937 "num_blocks": 63488, 00:26:14.937 "uuid": "0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9", 00:26:14.937 "assigned_rate_limits": { 00:26:14.937 "rw_ios_per_sec": 0, 00:26:14.937 "rw_mbytes_per_sec": 0, 00:26:14.937 "r_mbytes_per_sec": 0, 00:26:14.937 "w_mbytes_per_sec": 0 00:26:14.937 }, 00:26:14.937 "claimed": false, 00:26:14.937 "zoned": false, 00:26:14.937 "supported_io_types": { 00:26:14.937 "read": true, 00:26:14.937 "write": true, 00:26:14.937 "unmap": false, 00:26:14.937 "flush": false, 00:26:14.937 "reset": true, 00:26:14.937 "nvme_admin": false, 00:26:14.937 "nvme_io": false, 00:26:14.937 "nvme_io_md": false, 00:26:14.937 "write_zeroes": true, 00:26:14.937 "zcopy": false, 00:26:14.937 "get_zone_info": false, 
00:26:14.937 "zone_management": false, 00:26:14.937 "zone_append": false, 00:26:14.937 "compare": false, 00:26:14.937 "compare_and_write": false, 00:26:14.937 "abort": false, 00:26:14.937 "seek_hole": false, 00:26:14.937 "seek_data": false, 00:26:14.937 "copy": false, 00:26:14.937 "nvme_iov_md": false 00:26:14.937 }, 00:26:14.937 "memory_domains": [ 00:26:14.937 { 00:26:14.937 "dma_device_id": "system", 00:26:14.937 "dma_device_type": 1 00:26:14.937 }, 00:26:14.937 { 00:26:14.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.937 "dma_device_type": 2 00:26:14.937 }, 00:26:14.937 { 00:26:14.937 "dma_device_id": "system", 00:26:14.937 "dma_device_type": 1 00:26:14.937 }, 00:26:14.937 { 00:26:14.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.937 "dma_device_type": 2 00:26:14.937 }, 00:26:14.937 { 00:26:14.937 "dma_device_id": "system", 00:26:14.937 "dma_device_type": 1 00:26:14.937 }, 00:26:14.937 { 00:26:14.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.937 "dma_device_type": 2 00:26:14.937 } 00:26:14.937 ], 00:26:14.937 "driver_specific": { 00:26:14.937 "raid": { 00:26:14.937 "uuid": "0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9", 00:26:14.937 "strip_size_kb": 0, 00:26:14.937 "state": "online", 00:26:14.937 "raid_level": "raid1", 00:26:14.937 "superblock": true, 00:26:14.937 "num_base_bdevs": 3, 00:26:14.937 "num_base_bdevs_discovered": 3, 00:26:14.937 "num_base_bdevs_operational": 3, 00:26:14.937 "base_bdevs_list": [ 00:26:14.937 { 00:26:14.937 "name": "pt1", 00:26:14.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:14.937 "is_configured": true, 00:26:14.937 "data_offset": 2048, 00:26:14.937 "data_size": 63488 00:26:14.937 }, 00:26:14.937 { 00:26:14.937 "name": "pt2", 00:26:14.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:14.937 "is_configured": true, 00:26:14.937 "data_offset": 2048, 00:26:14.937 "data_size": 63488 00:26:14.937 }, 00:26:14.937 { 00:26:14.937 "name": "pt3", 00:26:14.937 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:26:14.937 "is_configured": true, 00:26:14.937 "data_offset": 2048, 00:26:14.937 "data_size": 63488 00:26:14.937 } 00:26:14.937 ] 00:26:14.937 } 00:26:14.937 } 00:26:14.937 }' 00:26:14.937 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:15.196 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:15.196 pt2 00:26:15.196 pt3' 00:26:15.196 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:26:15.197 [2024-11-26 17:22:45.261966] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9 '!=' 0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9 ']' 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.197 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.456 [2024-11-26 17:22:45.313826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:15.456 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.456 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:15.456 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:15.456 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:15.456 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:15.456 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:15.457 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:15.457 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:15.457 17:22:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:15.457 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:15.457 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:15.457 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.457 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:15.457 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.457 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.457 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.457 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:15.457 "name": "raid_bdev1", 00:26:15.457 "uuid": "0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9", 00:26:15.457 "strip_size_kb": 0, 00:26:15.457 "state": "online", 00:26:15.457 "raid_level": "raid1", 00:26:15.457 "superblock": true, 00:26:15.457 "num_base_bdevs": 3, 00:26:15.457 "num_base_bdevs_discovered": 2, 00:26:15.457 "num_base_bdevs_operational": 2, 00:26:15.457 "base_bdevs_list": [ 00:26:15.457 { 00:26:15.457 "name": null, 00:26:15.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.457 "is_configured": false, 00:26:15.457 "data_offset": 0, 00:26:15.457 "data_size": 63488 00:26:15.457 }, 00:26:15.457 { 00:26:15.457 "name": "pt2", 00:26:15.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:15.457 "is_configured": true, 00:26:15.457 "data_offset": 2048, 00:26:15.457 "data_size": 63488 00:26:15.457 }, 00:26:15.457 { 00:26:15.457 "name": "pt3", 00:26:15.457 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:15.457 "is_configured": true, 00:26:15.457 "data_offset": 2048, 00:26:15.457 "data_size": 63488 00:26:15.457 } 
00:26:15.457 ] 00:26:15.457 }' 00:26:15.457 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:15.457 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.716 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:15.716 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.716 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.716 [2024-11-26 17:22:45.777710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:15.716 [2024-11-26 17:22:45.777988] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:15.716 [2024-11-26 17:22:45.778121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:15.716 [2024-11-26 17:22:45.778196] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:15.716 [2024-11-26 17:22:45.778219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:15.716 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.716 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:15.716 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.716 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.716 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:26:15.716 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.975 17:22:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.975 [2024-11-26 17:22:45.853606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:15.975 [2024-11-26 17:22:45.853694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:15.975 [2024-11-26 17:22:45.853715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:26:15.975 [2024-11-26 17:22:45.853731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:15.975 [2024-11-26 17:22:45.856538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:15.975 [2024-11-26 17:22:45.856583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:15.975 [2024-11-26 17:22:45.856674] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:15.975 [2024-11-26 17:22:45.856737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:15.975 pt2 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:26:15.975 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:15.976 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:15.976 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:15.976 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:15.976 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:15.976 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:15.976 17:22:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:15.976 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:15.976 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:15.976 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:15.976 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.976 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.976 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.976 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.976 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:15.976 "name": "raid_bdev1", 00:26:15.976 "uuid": "0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9", 00:26:15.976 "strip_size_kb": 0, 00:26:15.976 "state": "configuring", 00:26:15.976 "raid_level": "raid1", 00:26:15.976 "superblock": true, 00:26:15.976 "num_base_bdevs": 3, 00:26:15.976 "num_base_bdevs_discovered": 1, 00:26:15.976 "num_base_bdevs_operational": 2, 00:26:15.976 "base_bdevs_list": [ 00:26:15.976 { 00:26:15.976 "name": null, 00:26:15.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.976 "is_configured": false, 00:26:15.976 "data_offset": 2048, 00:26:15.976 "data_size": 63488 00:26:15.976 }, 00:26:15.976 { 00:26:15.976 "name": "pt2", 00:26:15.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:15.976 "is_configured": true, 00:26:15.976 "data_offset": 2048, 00:26:15.976 "data_size": 63488 00:26:15.976 }, 00:26:15.976 { 00:26:15.976 "name": null, 00:26:15.976 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:15.976 "is_configured": false, 00:26:15.976 "data_offset": 2048, 00:26:15.976 "data_size": 63488 00:26:15.976 } 
00:26:15.976 ] 00:26:15.976 }' 00:26:15.976 17:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:15.976 17:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.235 [2024-11-26 17:22:46.301717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:16.235 [2024-11-26 17:22:46.301822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.235 [2024-11-26 17:22:46.301852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:16.235 [2024-11-26 17:22:46.301869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.235 [2024-11-26 17:22:46.302462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.235 [2024-11-26 17:22:46.302506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:16.235 [2024-11-26 17:22:46.302633] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:16.235 [2024-11-26 17:22:46.302675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:16.235 [2024-11-26 17:22:46.302820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:26:16.235 [2024-11-26 17:22:46.302916] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:16.235 [2024-11-26 17:22:46.303269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:16.235 [2024-11-26 17:22:46.303436] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:16.235 [2024-11-26 17:22:46.303447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:26:16.235 [2024-11-26 17:22:46.303617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.235 pt3 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.235 
17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.235 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.495 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:16.495 "name": "raid_bdev1", 00:26:16.495 "uuid": "0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9", 00:26:16.495 "strip_size_kb": 0, 00:26:16.495 "state": "online", 00:26:16.495 "raid_level": "raid1", 00:26:16.495 "superblock": true, 00:26:16.495 "num_base_bdevs": 3, 00:26:16.495 "num_base_bdevs_discovered": 2, 00:26:16.495 "num_base_bdevs_operational": 2, 00:26:16.495 "base_bdevs_list": [ 00:26:16.495 { 00:26:16.495 "name": null, 00:26:16.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.495 "is_configured": false, 00:26:16.495 "data_offset": 2048, 00:26:16.495 "data_size": 63488 00:26:16.495 }, 00:26:16.495 { 00:26:16.495 "name": "pt2", 00:26:16.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:16.495 "is_configured": true, 00:26:16.495 "data_offset": 2048, 00:26:16.495 "data_size": 63488 00:26:16.495 }, 00:26:16.495 { 00:26:16.495 "name": "pt3", 00:26:16.495 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:16.495 "is_configured": true, 00:26:16.495 "data_offset": 2048, 00:26:16.495 "data_size": 63488 00:26:16.495 } 00:26:16.495 ] 00:26:16.495 }' 00:26:16.495 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:16.495 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.754 [2024-11-26 17:22:46.773661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:16.754 [2024-11-26 17:22:46.773937] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:16.754 [2024-11-26 17:22:46.774068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:16.754 [2024-11-26 17:22:46.774147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:16.754 [2024-11-26 17:22:46.774160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.754 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.754 [2024-11-26 17:22:46.841632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:16.754 [2024-11-26 17:22:46.841711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.754 [2024-11-26 17:22:46.841736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:16.754 [2024-11-26 17:22:46.841749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.755 [2024-11-26 17:22:46.844511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.755 [2024-11-26 17:22:46.844772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:16.755 [2024-11-26 17:22:46.844900] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:16.755 [2024-11-26 17:22:46.844971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:16.755 [2024-11-26 17:22:46.845134] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:16.755 [2024-11-26 17:22:46.845149] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:16.755 [2024-11-26 17:22:46.845171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:26:16.755 [2024-11-26 17:22:46.845244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:16.755 pt1 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.755 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.013 17:22:46 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.013 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:17.013 "name": "raid_bdev1", 00:26:17.013 "uuid": "0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9", 00:26:17.013 "strip_size_kb": 0, 00:26:17.013 "state": "configuring", 00:26:17.013 "raid_level": "raid1", 00:26:17.013 "superblock": true, 00:26:17.013 "num_base_bdevs": 3, 00:26:17.013 "num_base_bdevs_discovered": 1, 00:26:17.013 "num_base_bdevs_operational": 2, 00:26:17.013 "base_bdevs_list": [ 00:26:17.013 { 00:26:17.013 "name": null, 00:26:17.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.013 "is_configured": false, 00:26:17.013 "data_offset": 2048, 00:26:17.013 "data_size": 63488 00:26:17.013 }, 00:26:17.013 { 00:26:17.013 "name": "pt2", 00:26:17.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:17.013 "is_configured": true, 00:26:17.013 "data_offset": 2048, 00:26:17.013 "data_size": 63488 00:26:17.013 }, 00:26:17.013 { 00:26:17.013 "name": null, 00:26:17.013 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:17.013 "is_configured": false, 00:26:17.013 "data_offset": 2048, 00:26:17.013 "data_size": 63488 00:26:17.013 } 00:26:17.013 ] 00:26:17.013 }' 00:26:17.013 17:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:17.013 17:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.273 [2024-11-26 17:22:47.305719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:17.273 [2024-11-26 17:22:47.306073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.273 [2024-11-26 17:22:47.306116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:17.273 [2024-11-26 17:22:47.306131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.273 [2024-11-26 17:22:47.306791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.273 [2024-11-26 17:22:47.306825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:17.273 [2024-11-26 17:22:47.306941] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:17.273 [2024-11-26 17:22:47.306970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:17.273 [2024-11-26 17:22:47.307133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:26:17.273 [2024-11-26 17:22:47.307145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:17.273 [2024-11-26 17:22:47.307436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:26:17.273 [2024-11-26 17:22:47.307630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:26:17.273 [2024-11-26 17:22:47.307648] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:26:17.273 [2024-11-26 17:22:47.307812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:17.273 pt3 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:17.273 "name": "raid_bdev1", 00:26:17.273 "uuid": "0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9", 00:26:17.273 "strip_size_kb": 0, 00:26:17.273 "state": "online", 00:26:17.273 "raid_level": "raid1", 00:26:17.273 "superblock": true, 00:26:17.273 "num_base_bdevs": 3, 00:26:17.273 "num_base_bdevs_discovered": 2, 00:26:17.273 "num_base_bdevs_operational": 2, 00:26:17.273 "base_bdevs_list": [ 00:26:17.273 { 00:26:17.273 "name": null, 00:26:17.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.273 "is_configured": false, 00:26:17.273 "data_offset": 2048, 00:26:17.273 "data_size": 63488 00:26:17.273 }, 00:26:17.273 { 00:26:17.273 "name": "pt2", 00:26:17.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:17.273 "is_configured": true, 00:26:17.273 "data_offset": 2048, 00:26:17.273 "data_size": 63488 00:26:17.273 }, 00:26:17.273 { 00:26:17.273 "name": "pt3", 00:26:17.273 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:17.273 "is_configured": true, 00:26:17.273 "data_offset": 2048, 00:26:17.273 "data_size": 63488 00:26:17.273 } 00:26:17.273 ] 00:26:17.273 }' 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:17.273 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:26:17.858 [2024-11-26 17:22:47.826078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9 '!=' 0dd2eb51-8f95-4c30-81a6-6cdc5e7162a9 ']' 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68737 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68737 ']' 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68737 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68737 00:26:17.858 killing process with pid 68737 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68737' 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68737 00:26:17.858 [2024-11-26 17:22:47.919608] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:17.858 17:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68737 00:26:17.858 [2024-11-26 17:22:47.919756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:17.858 [2024-11-26 17:22:47.919834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:17.858 [2024-11-26 17:22:47.919850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:26:18.425 [2024-11-26 17:22:48.257260] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:19.361 17:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:26:19.361 00:26:19.361 real 0m8.016s 00:26:19.361 user 0m12.329s 00:26:19.362 sys 0m1.731s 00:26:19.362 17:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:19.362 17:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.362 ************************************ 00:26:19.362 END TEST raid_superblock_test 00:26:19.362 ************************************ 00:26:19.621 17:22:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:26:19.621 17:22:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:19.621 17:22:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:19.621 17:22:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:19.621 ************************************ 00:26:19.621 START TEST raid_read_error_test 00:26:19.621 ************************************ 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:26:19.621 17:22:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:19.621 17:22:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eGHLDUbAC0 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69190 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69190 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69190 ']' 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:19.621 17:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.622 [2024-11-26 17:22:49.666120] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:26:19.622 [2024-11-26 17:22:49.666269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69190 ] 00:26:19.880 [2024-11-26 17:22:49.856328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.139 [2024-11-26 17:22:50.001309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.139 [2024-11-26 17:22:50.209357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:20.139 [2024-11-26 17:22:50.209398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:20.705 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.705 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:26:20.705 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:20.705 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:20.705 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.705 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.705 BaseBdev1_malloc 00:26:20.705 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.706 true 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.706 [2024-11-26 17:22:50.593615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:20.706 [2024-11-26 17:22:50.593689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:20.706 [2024-11-26 17:22:50.593716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:20.706 [2024-11-26 17:22:50.593732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:20.706 [2024-11-26 17:22:50.596528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:20.706 [2024-11-26 17:22:50.596590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:20.706 BaseBdev1 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.706 BaseBdev2_malloc 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.706 true 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.706 [2024-11-26 17:22:50.665795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:20.706 [2024-11-26 17:22:50.665867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:20.706 [2024-11-26 17:22:50.665888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:20.706 [2024-11-26 17:22:50.665903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:20.706 [2024-11-26 17:22:50.668707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:20.706 [2024-11-26 17:22:50.668747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:20.706 BaseBdev2 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.706 BaseBdev3_malloc 00:26:20.706 17:22:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.706 true 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.706 [2024-11-26 17:22:50.748507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:20.706 [2024-11-26 17:22:50.748591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:20.706 [2024-11-26 17:22:50.748615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:20.706 [2024-11-26 17:22:50.748632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:20.706 [2024-11-26 17:22:50.751427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:20.706 [2024-11-26 17:22:50.751472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:20.706 BaseBdev3 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.706 [2024-11-26 17:22:50.760589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:20.706 [2024-11-26 17:22:50.763016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:20.706 [2024-11-26 17:22:50.763094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:20.706 [2024-11-26 17:22:50.763300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:20.706 [2024-11-26 17:22:50.763330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:20.706 [2024-11-26 17:22:50.763628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:26:20.706 [2024-11-26 17:22:50.763821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:20.706 [2024-11-26 17:22:50.763843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:26:20.706 [2024-11-26 17:22:50.764004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:20.706 17:22:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:20.706 "name": "raid_bdev1", 00:26:20.706 "uuid": "58a19a36-f738-489c-82c8-d8b8abfb9457", 00:26:20.706 "strip_size_kb": 0, 00:26:20.706 "state": "online", 00:26:20.706 "raid_level": "raid1", 00:26:20.706 "superblock": true, 00:26:20.706 "num_base_bdevs": 3, 00:26:20.706 "num_base_bdevs_discovered": 3, 00:26:20.706 "num_base_bdevs_operational": 3, 00:26:20.706 "base_bdevs_list": [ 00:26:20.706 { 00:26:20.706 "name": "BaseBdev1", 00:26:20.706 "uuid": "94575c79-582f-5855-b960-d670c57a9cf7", 00:26:20.706 "is_configured": true, 00:26:20.706 "data_offset": 2048, 00:26:20.706 "data_size": 63488 00:26:20.706 }, 00:26:20.706 { 00:26:20.706 "name": "BaseBdev2", 00:26:20.706 "uuid": "e4f1fb7e-34cb-5a80-a763-99f107a6ead4", 00:26:20.706 "is_configured": true, 00:26:20.706 "data_offset": 2048, 00:26:20.706 "data_size": 63488 
00:26:20.706 }, 00:26:20.706 { 00:26:20.706 "name": "BaseBdev3", 00:26:20.706 "uuid": "d70f4153-4fbf-5e17-8734-0884e7f8e490", 00:26:20.706 "is_configured": true, 00:26:20.706 "data_offset": 2048, 00:26:20.706 "data_size": 63488 00:26:20.706 } 00:26:20.706 ] 00:26:20.706 }' 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:20.706 17:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.273 17:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:21.273 17:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:21.273 [2024-11-26 17:22:51.321407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:22.229 
17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:22.229 "name": "raid_bdev1", 00:26:22.229 "uuid": "58a19a36-f738-489c-82c8-d8b8abfb9457", 00:26:22.229 "strip_size_kb": 0, 00:26:22.229 "state": "online", 00:26:22.229 "raid_level": "raid1", 00:26:22.229 "superblock": true, 00:26:22.229 "num_base_bdevs": 3, 00:26:22.229 "num_base_bdevs_discovered": 3, 00:26:22.229 "num_base_bdevs_operational": 3, 00:26:22.229 "base_bdevs_list": [ 00:26:22.229 { 00:26:22.229 "name": "BaseBdev1", 00:26:22.229 "uuid": "94575c79-582f-5855-b960-d670c57a9cf7", 
00:26:22.229 "is_configured": true, 00:26:22.229 "data_offset": 2048, 00:26:22.229 "data_size": 63488 00:26:22.229 }, 00:26:22.229 { 00:26:22.229 "name": "BaseBdev2", 00:26:22.229 "uuid": "e4f1fb7e-34cb-5a80-a763-99f107a6ead4", 00:26:22.229 "is_configured": true, 00:26:22.229 "data_offset": 2048, 00:26:22.229 "data_size": 63488 00:26:22.229 }, 00:26:22.229 { 00:26:22.229 "name": "BaseBdev3", 00:26:22.229 "uuid": "d70f4153-4fbf-5e17-8734-0884e7f8e490", 00:26:22.229 "is_configured": true, 00:26:22.229 "data_offset": 2048, 00:26:22.229 "data_size": 63488 00:26:22.229 } 00:26:22.229 ] 00:26:22.229 }' 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:22.229 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.817 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:22.817 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.817 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.817 [2024-11-26 17:22:52.684200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:22.817 [2024-11-26 17:22:52.684250] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:22.817 [2024-11-26 17:22:52.687181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:22.817 [2024-11-26 17:22:52.687238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:22.817 [2024-11-26 17:22:52.687348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:22.817 [2024-11-26 17:22:52.687361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:26:22.817 { 00:26:22.817 "results": [ 00:26:22.817 { 00:26:22.817 "job": "raid_bdev1", 
00:26:22.817 "core_mask": "0x1", 00:26:22.817 "workload": "randrw", 00:26:22.817 "percentage": 50, 00:26:22.817 "status": "finished", 00:26:22.817 "queue_depth": 1, 00:26:22.817 "io_size": 131072, 00:26:22.817 "runtime": 1.362663, 00:26:22.817 "iops": 12605.46444718907, 00:26:22.817 "mibps": 1575.6830558986337, 00:26:22.817 "io_failed": 0, 00:26:22.817 "io_timeout": 0, 00:26:22.817 "avg_latency_us": 76.5086635650128, 00:26:22.817 "min_latency_us": 24.366265060240963, 00:26:22.817 "max_latency_us": 1572.6008032128514 00:26:22.817 } 00:26:22.817 ], 00:26:22.817 "core_count": 1 00:26:22.817 } 00:26:22.817 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.817 17:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69190 00:26:22.817 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69190 ']' 00:26:22.817 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69190 00:26:22.817 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:26:22.817 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:22.817 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69190 00:26:22.817 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:22.817 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:22.817 killing process with pid 69190 00:26:22.817 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69190' 00:26:22.817 17:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69190 00:26:22.817 [2024-11-26 17:22:52.739771] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:22.817 17:22:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69190 00:26:23.075 [2024-11-26 17:22:52.986463] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:24.453 17:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eGHLDUbAC0 00:26:24.453 17:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:24.453 17:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:24.453 17:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:26:24.453 17:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:26:24.453 17:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:24.453 17:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:24.453 17:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:24.453 00:26:24.453 real 0m4.780s 00:26:24.453 user 0m5.598s 00:26:24.453 sys 0m0.692s 00:26:24.453 17:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:24.453 17:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.453 ************************************ 00:26:24.453 END TEST raid_read_error_test 00:26:24.453 ************************************ 00:26:24.453 17:22:54 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:26:24.453 17:22:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:24.453 17:22:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:24.453 17:22:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:24.453 ************************************ 00:26:24.453 START TEST raid_write_error_test 00:26:24.453 ************************************ 00:26:24.453 17:22:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.w5veYsn8zg 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69336 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69336 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69336 ']' 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.453 17:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.453 [2024-11-26 17:22:54.515493] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:26:24.453 [2024-11-26 17:22:54.515645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69336 ] 00:26:24.712 [2024-11-26 17:22:54.701325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.971 [2024-11-26 17:22:54.843697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.971 [2024-11-26 17:22:55.060037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:24.971 [2024-11-26 17:22:55.060091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.539 BaseBdev1_malloc 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.539 true 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.539 [2024-11-26 17:22:55.432829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:25.539 [2024-11-26 17:22:55.432949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:25.539 [2024-11-26 17:22:55.432977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:25.539 [2024-11-26 17:22:55.432993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:25.539 [2024-11-26 17:22:55.435834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:25.539 [2024-11-26 17:22:55.435887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:25.539 BaseBdev1 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:25.539 BaseBdev2_malloc 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.539 true 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.539 [2024-11-26 17:22:55.501337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:25.539 [2024-11-26 17:22:55.501419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:25.539 [2024-11-26 17:22:55.501439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:25.539 [2024-11-26 17:22:55.501453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:25.539 [2024-11-26 17:22:55.504041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:25.539 [2024-11-26 17:22:55.504083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:25.539 BaseBdev2 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:25.539 17:22:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.539 BaseBdev3_malloc 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.539 true 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.539 [2024-11-26 17:22:55.583938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:25.539 [2024-11-26 17:22:55.584005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:25.539 [2024-11-26 17:22:55.584026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:25.539 [2024-11-26 17:22:55.584040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:25.539 [2024-11-26 17:22:55.586578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:25.539 [2024-11-26 17:22:55.586620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:26:25.539 BaseBdev3 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.539 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.540 [2024-11-26 17:22:55.596002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:25.540 [2024-11-26 17:22:55.598244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:25.540 [2024-11-26 17:22:55.598325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:25.540 [2024-11-26 17:22:55.598548] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:25.540 [2024-11-26 17:22:55.598562] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:25.540 [2024-11-26 17:22:55.598823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:26:25.540 [2024-11-26 17:22:55.599017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:25.540 [2024-11-26 17:22:55.599035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:26:25.540 [2024-11-26 17:22:55.599171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:25.540 "name": "raid_bdev1", 00:26:25.540 "uuid": "c6aa1517-42ac-4105-999f-cf3a8a51cc16", 00:26:25.540 "strip_size_kb": 0, 00:26:25.540 "state": "online", 00:26:25.540 "raid_level": "raid1", 00:26:25.540 "superblock": true, 00:26:25.540 "num_base_bdevs": 3, 00:26:25.540 "num_base_bdevs_discovered": 3, 00:26:25.540 "num_base_bdevs_operational": 3, 00:26:25.540 "base_bdevs_list": [ 00:26:25.540 { 00:26:25.540 "name": "BaseBdev1", 00:26:25.540 
"uuid": "dd21ce4a-6e12-58e5-90ed-e8e50abd6098", 00:26:25.540 "is_configured": true, 00:26:25.540 "data_offset": 2048, 00:26:25.540 "data_size": 63488 00:26:25.540 }, 00:26:25.540 { 00:26:25.540 "name": "BaseBdev2", 00:26:25.540 "uuid": "5624f106-736d-5d63-b24b-da4140877362", 00:26:25.540 "is_configured": true, 00:26:25.540 "data_offset": 2048, 00:26:25.540 "data_size": 63488 00:26:25.540 }, 00:26:25.540 { 00:26:25.540 "name": "BaseBdev3", 00:26:25.540 "uuid": "e097f37c-4a5b-5632-9a9b-537bedb40495", 00:26:25.540 "is_configured": true, 00:26:25.540 "data_offset": 2048, 00:26:25.540 "data_size": 63488 00:26:25.540 } 00:26:25.540 ] 00:26:25.540 }' 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:25.540 17:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.106 17:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:26.106 17:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:26.106 [2024-11-26 17:22:56.141283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.043 [2024-11-26 17:22:57.053304] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:26:27.043 [2024-11-26 17:22:57.053372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:27.043 [2024-11-26 17:22:57.053620] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:27.043 "name": "raid_bdev1", 00:26:27.043 "uuid": "c6aa1517-42ac-4105-999f-cf3a8a51cc16", 00:26:27.043 "strip_size_kb": 0, 00:26:27.043 "state": "online", 00:26:27.043 "raid_level": "raid1", 00:26:27.043 "superblock": true, 00:26:27.043 "num_base_bdevs": 3, 00:26:27.043 "num_base_bdevs_discovered": 2, 00:26:27.043 "num_base_bdevs_operational": 2, 00:26:27.043 "base_bdevs_list": [ 00:26:27.043 { 00:26:27.043 "name": null, 00:26:27.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.043 "is_configured": false, 00:26:27.043 "data_offset": 0, 00:26:27.043 "data_size": 63488 00:26:27.043 }, 00:26:27.043 { 00:26:27.043 "name": "BaseBdev2", 00:26:27.043 "uuid": "5624f106-736d-5d63-b24b-da4140877362", 00:26:27.043 "is_configured": true, 00:26:27.043 "data_offset": 2048, 00:26:27.043 "data_size": 63488 00:26:27.043 }, 00:26:27.043 { 00:26:27.043 "name": "BaseBdev3", 00:26:27.043 "uuid": "e097f37c-4a5b-5632-9a9b-537bedb40495", 00:26:27.043 "is_configured": true, 00:26:27.043 "data_offset": 2048, 00:26:27.043 "data_size": 63488 00:26:27.043 } 00:26:27.043 ] 00:26:27.043 }' 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:27.043 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.610 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:27.610 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.610 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.610 [2024-11-26 17:22:57.463489] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:27.610 [2024-11-26 17:22:57.463552] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:27.610 [2024-11-26 17:22:57.466146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:27.610 [2024-11-26 17:22:57.466218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:27.610 [2024-11-26 17:22:57.466304] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:27.610 [2024-11-26 17:22:57.466321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:26:27.610 { 00:26:27.610 "results": [ 00:26:27.610 { 00:26:27.610 "job": "raid_bdev1", 00:26:27.610 "core_mask": "0x1", 00:26:27.610 "workload": "randrw", 00:26:27.610 "percentage": 50, 00:26:27.610 "status": "finished", 00:26:27.610 "queue_depth": 1, 00:26:27.610 "io_size": 131072, 00:26:27.610 "runtime": 1.322097, 00:26:27.610 "iops": 14380.941791714224, 00:26:27.610 "mibps": 1797.617723964278, 00:26:27.610 "io_failed": 0, 00:26:27.610 "io_timeout": 0, 00:26:27.610 "avg_latency_us": 66.91760771587903, 00:26:27.610 "min_latency_us": 23.955020080321287, 00:26:27.610 "max_latency_us": 1408.1028112449799 00:26:27.610 } 00:26:27.610 ], 00:26:27.610 "core_count": 1 00:26:27.610 } 00:26:27.610 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.610 17:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69336 00:26:27.610 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69336 ']' 00:26:27.610 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69336 00:26:27.610 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:26:27.610 17:22:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:27.610 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69336 00:26:27.610 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:27.610 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:27.610 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69336' 00:26:27.610 killing process with pid 69336 00:26:27.610 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69336 00:26:27.610 [2024-11-26 17:22:57.511644] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:27.610 17:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69336 00:26:27.868 [2024-11-26 17:22:57.753693] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:29.242 17:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.w5veYsn8zg 00:26:29.242 17:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:29.242 17:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:29.242 17:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:26:29.242 17:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:26:29.242 17:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:29.242 17:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:29.242 17:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:29.242 00:26:29.242 real 0m4.707s 00:26:29.242 user 0m5.438s 00:26:29.242 sys 0m0.702s 00:26:29.242 17:22:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:29.242 17:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.242 ************************************ 00:26:29.242 END TEST raid_write_error_test 00:26:29.242 ************************************ 00:26:29.242 17:22:59 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:26:29.242 17:22:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:26:29.242 17:22:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:26:29.242 17:22:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:29.242 17:22:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:29.242 17:22:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:29.242 ************************************ 00:26:29.242 START TEST raid_state_function_test 00:26:29.242 ************************************ 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:29.242 
17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69474 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69474' 00:26:29.242 Process raid pid: 69474 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69474 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69474 ']' 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.242 17:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.242 [2024-11-26 17:22:59.301916] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:26:29.242 [2024-11-26 17:22:59.302118] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.501 [2024-11-26 17:22:59.490068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.759 [2024-11-26 17:22:59.647616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.019 [2024-11-26 17:22:59.892214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:30.019 [2024-11-26 17:22:59.892280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.278 [2024-11-26 17:23:00.189090] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:30.278 [2024-11-26 17:23:00.189181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:30.278 [2024-11-26 17:23:00.189195] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:30.278 [2024-11-26 17:23:00.189211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:30.278 [2024-11-26 17:23:00.189219] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:26:30.278 [2024-11-26 17:23:00.189233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:30.278 [2024-11-26 17:23:00.189242] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:30.278 [2024-11-26 17:23:00.189255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:30.278 "name": "Existed_Raid", 00:26:30.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.278 "strip_size_kb": 64, 00:26:30.278 "state": "configuring", 00:26:30.278 "raid_level": "raid0", 00:26:30.278 "superblock": false, 00:26:30.278 "num_base_bdevs": 4, 00:26:30.278 "num_base_bdevs_discovered": 0, 00:26:30.278 "num_base_bdevs_operational": 4, 00:26:30.278 "base_bdevs_list": [ 00:26:30.278 { 00:26:30.278 "name": "BaseBdev1", 00:26:30.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.278 "is_configured": false, 00:26:30.278 "data_offset": 0, 00:26:30.278 "data_size": 0 00:26:30.278 }, 00:26:30.278 { 00:26:30.278 "name": "BaseBdev2", 00:26:30.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.278 "is_configured": false, 00:26:30.278 "data_offset": 0, 00:26:30.278 "data_size": 0 00:26:30.278 }, 00:26:30.278 { 00:26:30.278 "name": "BaseBdev3", 00:26:30.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.278 "is_configured": false, 00:26:30.278 "data_offset": 0, 00:26:30.278 "data_size": 0 00:26:30.278 }, 00:26:30.278 { 00:26:30.278 "name": "BaseBdev4", 00:26:30.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.278 "is_configured": false, 00:26:30.278 "data_offset": 0, 00:26:30.278 "data_size": 0 00:26:30.278 } 00:26:30.278 ] 00:26:30.278 }' 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:30.278 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.537 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:26:30.537 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.537 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.537 [2024-11-26 17:23:00.644441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:30.537 [2024-11-26 17:23:00.644504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:30.537 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.537 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:30.537 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.537 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.796 [2024-11-26 17:23:00.656440] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:30.796 [2024-11-26 17:23:00.656505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:30.796 [2024-11-26 17:23:00.656531] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:30.796 [2024-11-26 17:23:00.656546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:30.796 [2024-11-26 17:23:00.656555] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:30.796 [2024-11-26 17:23:00.656569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:30.796 [2024-11-26 17:23:00.656577] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:30.796 [2024-11-26 17:23:00.656590] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.796 [2024-11-26 17:23:00.706755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:30.796 BaseBdev1 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.796 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.796 [ 00:26:30.796 { 00:26:30.796 "name": "BaseBdev1", 00:26:30.796 "aliases": [ 00:26:30.796 "be93dc93-c90d-4769-a121-d9127a05cc0b" 00:26:30.796 ], 00:26:30.796 "product_name": "Malloc disk", 00:26:30.796 "block_size": 512, 00:26:30.796 "num_blocks": 65536, 00:26:30.796 "uuid": "be93dc93-c90d-4769-a121-d9127a05cc0b", 00:26:30.796 "assigned_rate_limits": { 00:26:30.796 "rw_ios_per_sec": 0, 00:26:30.796 "rw_mbytes_per_sec": 0, 00:26:30.796 "r_mbytes_per_sec": 0, 00:26:30.796 "w_mbytes_per_sec": 0 00:26:30.796 }, 00:26:30.796 "claimed": true, 00:26:30.796 "claim_type": "exclusive_write", 00:26:30.796 "zoned": false, 00:26:30.796 "supported_io_types": { 00:26:30.796 "read": true, 00:26:30.797 "write": true, 00:26:30.797 "unmap": true, 00:26:30.797 "flush": true, 00:26:30.797 "reset": true, 00:26:30.797 "nvme_admin": false, 00:26:30.797 "nvme_io": false, 00:26:30.797 "nvme_io_md": false, 00:26:30.797 "write_zeroes": true, 00:26:30.797 "zcopy": true, 00:26:30.797 "get_zone_info": false, 00:26:30.797 "zone_management": false, 00:26:30.797 "zone_append": false, 00:26:30.797 "compare": false, 00:26:30.797 "compare_and_write": false, 00:26:30.797 "abort": true, 00:26:30.797 "seek_hole": false, 00:26:30.797 "seek_data": false, 00:26:30.797 "copy": true, 00:26:30.797 "nvme_iov_md": false 00:26:30.797 }, 00:26:30.797 "memory_domains": [ 00:26:30.797 { 00:26:30.797 "dma_device_id": "system", 00:26:30.797 "dma_device_type": 1 00:26:30.797 }, 00:26:30.797 { 00:26:30.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:30.797 "dma_device_type": 2 00:26:30.797 } 00:26:30.797 ], 00:26:30.797 "driver_specific": {} 00:26:30.797 } 00:26:30.797 ] 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:30.797 "name": "Existed_Raid", 
00:26:30.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.797 "strip_size_kb": 64, 00:26:30.797 "state": "configuring", 00:26:30.797 "raid_level": "raid0", 00:26:30.797 "superblock": false, 00:26:30.797 "num_base_bdevs": 4, 00:26:30.797 "num_base_bdevs_discovered": 1, 00:26:30.797 "num_base_bdevs_operational": 4, 00:26:30.797 "base_bdevs_list": [ 00:26:30.797 { 00:26:30.797 "name": "BaseBdev1", 00:26:30.797 "uuid": "be93dc93-c90d-4769-a121-d9127a05cc0b", 00:26:30.797 "is_configured": true, 00:26:30.797 "data_offset": 0, 00:26:30.797 "data_size": 65536 00:26:30.797 }, 00:26:30.797 { 00:26:30.797 "name": "BaseBdev2", 00:26:30.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.797 "is_configured": false, 00:26:30.797 "data_offset": 0, 00:26:30.797 "data_size": 0 00:26:30.797 }, 00:26:30.797 { 00:26:30.797 "name": "BaseBdev3", 00:26:30.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.797 "is_configured": false, 00:26:30.797 "data_offset": 0, 00:26:30.797 "data_size": 0 00:26:30.797 }, 00:26:30.797 { 00:26:30.797 "name": "BaseBdev4", 00:26:30.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.797 "is_configured": false, 00:26:30.797 "data_offset": 0, 00:26:30.797 "data_size": 0 00:26:30.797 } 00:26:30.797 ] 00:26:30.797 }' 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:30.797 17:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.365 [2024-11-26 17:23:01.194175] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:31.365 [2024-11-26 17:23:01.194254] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.365 [2024-11-26 17:23:01.206240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:31.365 [2024-11-26 17:23:01.208789] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:31.365 [2024-11-26 17:23:01.208845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:31.365 [2024-11-26 17:23:01.208858] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:31.365 [2024-11-26 17:23:01.208876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:31.365 [2024-11-26 17:23:01.208885] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:31.365 [2024-11-26 17:23:01.208898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.365 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:31.365 "name": "Existed_Raid", 00:26:31.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.365 "strip_size_kb": 64, 00:26:31.365 "state": "configuring", 00:26:31.365 "raid_level": "raid0", 00:26:31.365 "superblock": false, 00:26:31.365 "num_base_bdevs": 4, 00:26:31.365 
"num_base_bdevs_discovered": 1, 00:26:31.365 "num_base_bdevs_operational": 4, 00:26:31.365 "base_bdevs_list": [ 00:26:31.365 { 00:26:31.365 "name": "BaseBdev1", 00:26:31.365 "uuid": "be93dc93-c90d-4769-a121-d9127a05cc0b", 00:26:31.365 "is_configured": true, 00:26:31.365 "data_offset": 0, 00:26:31.365 "data_size": 65536 00:26:31.365 }, 00:26:31.365 { 00:26:31.365 "name": "BaseBdev2", 00:26:31.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.365 "is_configured": false, 00:26:31.365 "data_offset": 0, 00:26:31.365 "data_size": 0 00:26:31.365 }, 00:26:31.365 { 00:26:31.365 "name": "BaseBdev3", 00:26:31.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.365 "is_configured": false, 00:26:31.365 "data_offset": 0, 00:26:31.365 "data_size": 0 00:26:31.365 }, 00:26:31.365 { 00:26:31.365 "name": "BaseBdev4", 00:26:31.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.365 "is_configured": false, 00:26:31.365 "data_offset": 0, 00:26:31.365 "data_size": 0 00:26:31.365 } 00:26:31.366 ] 00:26:31.366 }' 00:26:31.366 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:31.366 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.625 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:31.626 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.626 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.626 [2024-11-26 17:23:01.698954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:31.626 BaseBdev2 00:26:31.626 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.626 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:31.626 17:23:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:31.626 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:31.626 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:31.626 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:31.626 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:31.626 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:31.626 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.626 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.626 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.626 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:31.626 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.626 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.626 [ 00:26:31.626 { 00:26:31.626 "name": "BaseBdev2", 00:26:31.626 "aliases": [ 00:26:31.626 "0fd2709f-2043-4c93-af22-023aac6a4570" 00:26:31.626 ], 00:26:31.626 "product_name": "Malloc disk", 00:26:31.626 "block_size": 512, 00:26:31.626 "num_blocks": 65536, 00:26:31.626 "uuid": "0fd2709f-2043-4c93-af22-023aac6a4570", 00:26:31.626 "assigned_rate_limits": { 00:26:31.626 "rw_ios_per_sec": 0, 00:26:31.626 "rw_mbytes_per_sec": 0, 00:26:31.626 "r_mbytes_per_sec": 0, 00:26:31.626 "w_mbytes_per_sec": 0 00:26:31.626 }, 00:26:31.626 "claimed": true, 00:26:31.626 "claim_type": "exclusive_write", 00:26:31.626 "zoned": false, 00:26:31.626 "supported_io_types": { 
00:26:31.626 "read": true, 00:26:31.626 "write": true, 00:26:31.884 "unmap": true, 00:26:31.884 "flush": true, 00:26:31.884 "reset": true, 00:26:31.884 "nvme_admin": false, 00:26:31.884 "nvme_io": false, 00:26:31.884 "nvme_io_md": false, 00:26:31.884 "write_zeroes": true, 00:26:31.884 "zcopy": true, 00:26:31.884 "get_zone_info": false, 00:26:31.884 "zone_management": false, 00:26:31.884 "zone_append": false, 00:26:31.884 "compare": false, 00:26:31.884 "compare_and_write": false, 00:26:31.884 "abort": true, 00:26:31.884 "seek_hole": false, 00:26:31.884 "seek_data": false, 00:26:31.884 "copy": true, 00:26:31.884 "nvme_iov_md": false 00:26:31.884 }, 00:26:31.884 "memory_domains": [ 00:26:31.884 { 00:26:31.884 "dma_device_id": "system", 00:26:31.884 "dma_device_type": 1 00:26:31.884 }, 00:26:31.884 { 00:26:31.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.884 "dma_device_type": 2 00:26:31.884 } 00:26:31.884 ], 00:26:31.884 "driver_specific": {} 00:26:31.884 } 00:26:31.884 ] 00:26:31.884 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.884 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:31.884 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:31.884 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:31.884 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:31.884 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:31.884 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:31.884 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:31.884 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:26:31.885 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:31.885 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:31.885 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:31.885 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:31.885 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:31.885 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:31.885 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.885 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:31.885 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.885 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.885 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:31.885 "name": "Existed_Raid", 00:26:31.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.885 "strip_size_kb": 64, 00:26:31.885 "state": "configuring", 00:26:31.885 "raid_level": "raid0", 00:26:31.885 "superblock": false, 00:26:31.885 "num_base_bdevs": 4, 00:26:31.885 "num_base_bdevs_discovered": 2, 00:26:31.885 "num_base_bdevs_operational": 4, 00:26:31.885 "base_bdevs_list": [ 00:26:31.885 { 00:26:31.885 "name": "BaseBdev1", 00:26:31.885 "uuid": "be93dc93-c90d-4769-a121-d9127a05cc0b", 00:26:31.885 "is_configured": true, 00:26:31.885 "data_offset": 0, 00:26:31.885 "data_size": 65536 00:26:31.885 }, 00:26:31.885 { 00:26:31.885 "name": "BaseBdev2", 00:26:31.885 "uuid": "0fd2709f-2043-4c93-af22-023aac6a4570", 00:26:31.885 
"is_configured": true, 00:26:31.885 "data_offset": 0, 00:26:31.885 "data_size": 65536 00:26:31.885 }, 00:26:31.885 { 00:26:31.885 "name": "BaseBdev3", 00:26:31.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.885 "is_configured": false, 00:26:31.885 "data_offset": 0, 00:26:31.885 "data_size": 0 00:26:31.885 }, 00:26:31.885 { 00:26:31.885 "name": "BaseBdev4", 00:26:31.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.885 "is_configured": false, 00:26:31.885 "data_offset": 0, 00:26:31.885 "data_size": 0 00:26:31.885 } 00:26:31.885 ] 00:26:31.885 }' 00:26:31.885 17:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:31.885 17:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.144 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:32.144 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.144 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.144 [2024-11-26 17:23:02.246222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:32.144 BaseBdev3 00:26:32.144 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.144 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:32.144 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:26:32.144 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:32.144 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:32.144 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:32.144 17:23:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:32.144 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:32.144 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.144 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.403 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.403 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:32.403 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.403 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.403 [ 00:26:32.403 { 00:26:32.403 "name": "BaseBdev3", 00:26:32.403 "aliases": [ 00:26:32.403 "daefedba-4abe-4291-b27c-fc6ec324b074" 00:26:32.403 ], 00:26:32.403 "product_name": "Malloc disk", 00:26:32.403 "block_size": 512, 00:26:32.403 "num_blocks": 65536, 00:26:32.403 "uuid": "daefedba-4abe-4291-b27c-fc6ec324b074", 00:26:32.403 "assigned_rate_limits": { 00:26:32.403 "rw_ios_per_sec": 0, 00:26:32.403 "rw_mbytes_per_sec": 0, 00:26:32.403 "r_mbytes_per_sec": 0, 00:26:32.403 "w_mbytes_per_sec": 0 00:26:32.403 }, 00:26:32.403 "claimed": true, 00:26:32.403 "claim_type": "exclusive_write", 00:26:32.403 "zoned": false, 00:26:32.403 "supported_io_types": { 00:26:32.403 "read": true, 00:26:32.403 "write": true, 00:26:32.403 "unmap": true, 00:26:32.403 "flush": true, 00:26:32.403 "reset": true, 00:26:32.403 "nvme_admin": false, 00:26:32.403 "nvme_io": false, 00:26:32.403 "nvme_io_md": false, 00:26:32.403 "write_zeroes": true, 00:26:32.403 "zcopy": true, 00:26:32.403 "get_zone_info": false, 00:26:32.403 "zone_management": false, 00:26:32.403 "zone_append": false, 00:26:32.403 "compare": false, 00:26:32.403 "compare_and_write": false, 
00:26:32.403 "abort": true, 00:26:32.403 "seek_hole": false, 00:26:32.403 "seek_data": false, 00:26:32.403 "copy": true, 00:26:32.403 "nvme_iov_md": false 00:26:32.403 }, 00:26:32.403 "memory_domains": [ 00:26:32.403 { 00:26:32.403 "dma_device_id": "system", 00:26:32.403 "dma_device_type": 1 00:26:32.403 }, 00:26:32.403 { 00:26:32.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.403 "dma_device_type": 2 00:26:32.403 } 00:26:32.403 ], 00:26:32.403 "driver_specific": {} 00:26:32.403 } 00:26:32.403 ] 00:26:32.403 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.403 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:32.403 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:32.403 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:32.403 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:32.404 "name": "Existed_Raid", 00:26:32.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.404 "strip_size_kb": 64, 00:26:32.404 "state": "configuring", 00:26:32.404 "raid_level": "raid0", 00:26:32.404 "superblock": false, 00:26:32.404 "num_base_bdevs": 4, 00:26:32.404 "num_base_bdevs_discovered": 3, 00:26:32.404 "num_base_bdevs_operational": 4, 00:26:32.404 "base_bdevs_list": [ 00:26:32.404 { 00:26:32.404 "name": "BaseBdev1", 00:26:32.404 "uuid": "be93dc93-c90d-4769-a121-d9127a05cc0b", 00:26:32.404 "is_configured": true, 00:26:32.404 "data_offset": 0, 00:26:32.404 "data_size": 65536 00:26:32.404 }, 00:26:32.404 { 00:26:32.404 "name": "BaseBdev2", 00:26:32.404 "uuid": "0fd2709f-2043-4c93-af22-023aac6a4570", 00:26:32.404 "is_configured": true, 00:26:32.404 "data_offset": 0, 00:26:32.404 "data_size": 65536 00:26:32.404 }, 00:26:32.404 { 00:26:32.404 "name": "BaseBdev3", 00:26:32.404 "uuid": "daefedba-4abe-4291-b27c-fc6ec324b074", 00:26:32.404 "is_configured": true, 00:26:32.404 "data_offset": 0, 00:26:32.404 "data_size": 65536 00:26:32.404 }, 00:26:32.404 { 00:26:32.404 "name": "BaseBdev4", 00:26:32.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.404 "is_configured": false, 
00:26:32.404 "data_offset": 0, 00:26:32.404 "data_size": 0 00:26:32.404 } 00:26:32.404 ] 00:26:32.404 }' 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:32.404 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.662 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:26:32.662 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.662 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.921 [2024-11-26 17:23:02.792664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:32.921 [2024-11-26 17:23:02.792729] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:32.921 [2024-11-26 17:23:02.792742] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:26:32.921 [2024-11-26 17:23:02.793083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:32.921 [2024-11-26 17:23:02.793277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:32.921 [2024-11-26 17:23:02.793303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:32.921 [2024-11-26 17:23:02.793676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:32.921 BaseBdev4 00:26:32.921 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.921 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:26:32.921 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:26:32.921 17:23:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:32.921 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:32.921 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.922 [ 00:26:32.922 { 00:26:32.922 "name": "BaseBdev4", 00:26:32.922 "aliases": [ 00:26:32.922 "62f5ca95-e05d-459e-bd94-e096e00418b6" 00:26:32.922 ], 00:26:32.922 "product_name": "Malloc disk", 00:26:32.922 "block_size": 512, 00:26:32.922 "num_blocks": 65536, 00:26:32.922 "uuid": "62f5ca95-e05d-459e-bd94-e096e00418b6", 00:26:32.922 "assigned_rate_limits": { 00:26:32.922 "rw_ios_per_sec": 0, 00:26:32.922 "rw_mbytes_per_sec": 0, 00:26:32.922 "r_mbytes_per_sec": 0, 00:26:32.922 "w_mbytes_per_sec": 0 00:26:32.922 }, 00:26:32.922 "claimed": true, 00:26:32.922 "claim_type": "exclusive_write", 00:26:32.922 "zoned": false, 00:26:32.922 "supported_io_types": { 00:26:32.922 "read": true, 00:26:32.922 "write": true, 00:26:32.922 "unmap": true, 00:26:32.922 "flush": true, 00:26:32.922 "reset": true, 00:26:32.922 
"nvme_admin": false, 00:26:32.922 "nvme_io": false, 00:26:32.922 "nvme_io_md": false, 00:26:32.922 "write_zeroes": true, 00:26:32.922 "zcopy": true, 00:26:32.922 "get_zone_info": false, 00:26:32.922 "zone_management": false, 00:26:32.922 "zone_append": false, 00:26:32.922 "compare": false, 00:26:32.922 "compare_and_write": false, 00:26:32.922 "abort": true, 00:26:32.922 "seek_hole": false, 00:26:32.922 "seek_data": false, 00:26:32.922 "copy": true, 00:26:32.922 "nvme_iov_md": false 00:26:32.922 }, 00:26:32.922 "memory_domains": [ 00:26:32.922 { 00:26:32.922 "dma_device_id": "system", 00:26:32.922 "dma_device_type": 1 00:26:32.922 }, 00:26:32.922 { 00:26:32.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.922 "dma_device_type": 2 00:26:32.922 } 00:26:32.922 ], 00:26:32.922 "driver_specific": {} 00:26:32.922 } 00:26:32.922 ] 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:32.922 17:23:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:32.922 "name": "Existed_Raid", 00:26:32.922 "uuid": "2a89819b-5a13-4fd9-88e5-dea16f39bef1", 00:26:32.922 "strip_size_kb": 64, 00:26:32.922 "state": "online", 00:26:32.922 "raid_level": "raid0", 00:26:32.922 "superblock": false, 00:26:32.922 "num_base_bdevs": 4, 00:26:32.922 "num_base_bdevs_discovered": 4, 00:26:32.922 "num_base_bdevs_operational": 4, 00:26:32.922 "base_bdevs_list": [ 00:26:32.922 { 00:26:32.922 "name": "BaseBdev1", 00:26:32.922 "uuid": "be93dc93-c90d-4769-a121-d9127a05cc0b", 00:26:32.922 "is_configured": true, 00:26:32.922 "data_offset": 0, 00:26:32.922 "data_size": 65536 00:26:32.922 }, 00:26:32.922 { 00:26:32.922 "name": "BaseBdev2", 00:26:32.922 "uuid": "0fd2709f-2043-4c93-af22-023aac6a4570", 00:26:32.922 "is_configured": true, 00:26:32.922 "data_offset": 0, 00:26:32.922 "data_size": 65536 00:26:32.922 }, 00:26:32.922 { 00:26:32.922 "name": "BaseBdev3", 00:26:32.922 "uuid": 
"daefedba-4abe-4291-b27c-fc6ec324b074", 00:26:32.922 "is_configured": true, 00:26:32.922 "data_offset": 0, 00:26:32.922 "data_size": 65536 00:26:32.922 }, 00:26:32.922 { 00:26:32.922 "name": "BaseBdev4", 00:26:32.922 "uuid": "62f5ca95-e05d-459e-bd94-e096e00418b6", 00:26:32.922 "is_configured": true, 00:26:32.922 "data_offset": 0, 00:26:32.922 "data_size": 65536 00:26:32.922 } 00:26:32.922 ] 00:26:32.922 }' 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:32.922 17:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.490 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:33.490 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:33.490 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:33.490 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:33.490 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:33.490 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:33.490 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:33.490 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:33.490 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.490 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.490 [2024-11-26 17:23:03.320439] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:33.490 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.490 17:23:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:33.490 "name": "Existed_Raid", 00:26:33.490 "aliases": [ 00:26:33.490 "2a89819b-5a13-4fd9-88e5-dea16f39bef1" 00:26:33.490 ], 00:26:33.490 "product_name": "Raid Volume", 00:26:33.491 "block_size": 512, 00:26:33.491 "num_blocks": 262144, 00:26:33.491 "uuid": "2a89819b-5a13-4fd9-88e5-dea16f39bef1", 00:26:33.491 "assigned_rate_limits": { 00:26:33.491 "rw_ios_per_sec": 0, 00:26:33.491 "rw_mbytes_per_sec": 0, 00:26:33.491 "r_mbytes_per_sec": 0, 00:26:33.491 "w_mbytes_per_sec": 0 00:26:33.491 }, 00:26:33.491 "claimed": false, 00:26:33.491 "zoned": false, 00:26:33.491 "supported_io_types": { 00:26:33.491 "read": true, 00:26:33.491 "write": true, 00:26:33.491 "unmap": true, 00:26:33.491 "flush": true, 00:26:33.491 "reset": true, 00:26:33.491 "nvme_admin": false, 00:26:33.491 "nvme_io": false, 00:26:33.491 "nvme_io_md": false, 00:26:33.491 "write_zeroes": true, 00:26:33.491 "zcopy": false, 00:26:33.491 "get_zone_info": false, 00:26:33.491 "zone_management": false, 00:26:33.491 "zone_append": false, 00:26:33.491 "compare": false, 00:26:33.491 "compare_and_write": false, 00:26:33.491 "abort": false, 00:26:33.491 "seek_hole": false, 00:26:33.491 "seek_data": false, 00:26:33.491 "copy": false, 00:26:33.491 "nvme_iov_md": false 00:26:33.491 }, 00:26:33.491 "memory_domains": [ 00:26:33.491 { 00:26:33.491 "dma_device_id": "system", 00:26:33.491 "dma_device_type": 1 00:26:33.491 }, 00:26:33.491 { 00:26:33.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.491 "dma_device_type": 2 00:26:33.491 }, 00:26:33.491 { 00:26:33.491 "dma_device_id": "system", 00:26:33.491 "dma_device_type": 1 00:26:33.491 }, 00:26:33.491 { 00:26:33.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.491 "dma_device_type": 2 00:26:33.491 }, 00:26:33.491 { 00:26:33.491 "dma_device_id": "system", 00:26:33.491 "dma_device_type": 1 00:26:33.491 }, 00:26:33.491 { 00:26:33.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:26:33.491 "dma_device_type": 2 00:26:33.491 }, 00:26:33.491 { 00:26:33.491 "dma_device_id": "system", 00:26:33.491 "dma_device_type": 1 00:26:33.491 }, 00:26:33.491 { 00:26:33.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.491 "dma_device_type": 2 00:26:33.491 } 00:26:33.491 ], 00:26:33.491 "driver_specific": { 00:26:33.491 "raid": { 00:26:33.491 "uuid": "2a89819b-5a13-4fd9-88e5-dea16f39bef1", 00:26:33.491 "strip_size_kb": 64, 00:26:33.491 "state": "online", 00:26:33.491 "raid_level": "raid0", 00:26:33.491 "superblock": false, 00:26:33.491 "num_base_bdevs": 4, 00:26:33.491 "num_base_bdevs_discovered": 4, 00:26:33.491 "num_base_bdevs_operational": 4, 00:26:33.491 "base_bdevs_list": [ 00:26:33.491 { 00:26:33.491 "name": "BaseBdev1", 00:26:33.491 "uuid": "be93dc93-c90d-4769-a121-d9127a05cc0b", 00:26:33.491 "is_configured": true, 00:26:33.491 "data_offset": 0, 00:26:33.491 "data_size": 65536 00:26:33.491 }, 00:26:33.491 { 00:26:33.491 "name": "BaseBdev2", 00:26:33.491 "uuid": "0fd2709f-2043-4c93-af22-023aac6a4570", 00:26:33.491 "is_configured": true, 00:26:33.491 "data_offset": 0, 00:26:33.491 "data_size": 65536 00:26:33.491 }, 00:26:33.491 { 00:26:33.491 "name": "BaseBdev3", 00:26:33.491 "uuid": "daefedba-4abe-4291-b27c-fc6ec324b074", 00:26:33.491 "is_configured": true, 00:26:33.491 "data_offset": 0, 00:26:33.491 "data_size": 65536 00:26:33.491 }, 00:26:33.491 { 00:26:33.491 "name": "BaseBdev4", 00:26:33.491 "uuid": "62f5ca95-e05d-459e-bd94-e096e00418b6", 00:26:33.491 "is_configured": true, 00:26:33.491 "data_offset": 0, 00:26:33.491 "data_size": 65536 00:26:33.491 } 00:26:33.491 ] 00:26:33.491 } 00:26:33.491 } 00:26:33.491 }' 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:33.491 BaseBdev2 00:26:33.491 BaseBdev3 
00:26:33.491 BaseBdev4' 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.491 17:23:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:33.491 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:33.751 17:23:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.751 [2024-11-26 17:23:03.655725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:33.751 [2024-11-26 17:23:03.655770] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:33.751 [2024-11-26 17:23:03.655837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:33.751 "name": "Existed_Raid", 00:26:33.751 "uuid": "2a89819b-5a13-4fd9-88e5-dea16f39bef1", 00:26:33.751 "strip_size_kb": 64, 00:26:33.751 "state": "offline", 00:26:33.751 "raid_level": "raid0", 00:26:33.751 "superblock": false, 00:26:33.751 "num_base_bdevs": 4, 00:26:33.751 "num_base_bdevs_discovered": 3, 00:26:33.751 "num_base_bdevs_operational": 3, 00:26:33.751 "base_bdevs_list": [ 00:26:33.751 { 00:26:33.751 "name": null, 00:26:33.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.751 "is_configured": false, 00:26:33.751 "data_offset": 0, 00:26:33.751 "data_size": 65536 00:26:33.751 }, 00:26:33.751 { 00:26:33.751 "name": "BaseBdev2", 00:26:33.751 "uuid": "0fd2709f-2043-4c93-af22-023aac6a4570", 00:26:33.751 "is_configured": 
true, 00:26:33.751 "data_offset": 0, 00:26:33.751 "data_size": 65536 00:26:33.751 }, 00:26:33.751 { 00:26:33.751 "name": "BaseBdev3", 00:26:33.751 "uuid": "daefedba-4abe-4291-b27c-fc6ec324b074", 00:26:33.751 "is_configured": true, 00:26:33.751 "data_offset": 0, 00:26:33.751 "data_size": 65536 00:26:33.751 }, 00:26:33.751 { 00:26:33.751 "name": "BaseBdev4", 00:26:33.751 "uuid": "62f5ca95-e05d-459e-bd94-e096e00418b6", 00:26:33.751 "is_configured": true, 00:26:33.751 "data_offset": 0, 00:26:33.751 "data_size": 65536 00:26:33.751 } 00:26:33.751 ] 00:26:33.751 }' 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:33.751 17:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.319 [2024-11-26 17:23:04.249850] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.319 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.319 [2024-11-26 17:23:04.415811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:34.579 17:23:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.579 [2024-11-26 17:23:04.569575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:34.579 [2024-11-26 17:23:04.569643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.579 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.841 BaseBdev2 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.841 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.841 [ 00:26:34.841 { 00:26:34.841 "name": "BaseBdev2", 00:26:34.841 "aliases": [ 00:26:34.841 "e62e4234-d9a0-4ada-8580-b21a1958578a" 00:26:34.841 ], 00:26:34.841 "product_name": "Malloc disk", 00:26:34.841 "block_size": 512, 00:26:34.841 "num_blocks": 65536, 00:26:34.841 "uuid": "e62e4234-d9a0-4ada-8580-b21a1958578a", 00:26:34.841 "assigned_rate_limits": { 00:26:34.841 "rw_ios_per_sec": 0, 00:26:34.841 "rw_mbytes_per_sec": 0, 00:26:34.841 "r_mbytes_per_sec": 0, 00:26:34.841 "w_mbytes_per_sec": 0 00:26:34.841 }, 00:26:34.841 "claimed": false, 00:26:34.841 "zoned": false, 00:26:34.841 "supported_io_types": { 00:26:34.841 "read": true, 00:26:34.841 "write": true, 00:26:34.841 "unmap": true, 00:26:34.841 "flush": true, 00:26:34.841 "reset": true, 00:26:34.841 "nvme_admin": false, 00:26:34.841 "nvme_io": false, 00:26:34.841 "nvme_io_md": false, 00:26:34.841 "write_zeroes": true, 00:26:34.841 "zcopy": true, 00:26:34.841 "get_zone_info": false, 00:26:34.841 "zone_management": false, 00:26:34.841 "zone_append": false, 00:26:34.841 "compare": false, 00:26:34.841 "compare_and_write": false, 00:26:34.841 "abort": true, 00:26:34.841 "seek_hole": false, 00:26:34.841 
"seek_data": false, 00:26:34.841 "copy": true, 00:26:34.841 "nvme_iov_md": false 00:26:34.841 }, 00:26:34.841 "memory_domains": [ 00:26:34.841 { 00:26:34.841 "dma_device_id": "system", 00:26:34.842 "dma_device_type": 1 00:26:34.842 }, 00:26:34.842 { 00:26:34.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.842 "dma_device_type": 2 00:26:34.842 } 00:26:34.842 ], 00:26:34.842 "driver_specific": {} 00:26:34.842 } 00:26:34.842 ] 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.842 BaseBdev3 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.842 [ 00:26:34.842 { 00:26:34.842 "name": "BaseBdev3", 00:26:34.842 "aliases": [ 00:26:34.842 "a109fb52-da52-42c8-b206-90c03e484395" 00:26:34.842 ], 00:26:34.842 "product_name": "Malloc disk", 00:26:34.842 "block_size": 512, 00:26:34.842 "num_blocks": 65536, 00:26:34.842 "uuid": "a109fb52-da52-42c8-b206-90c03e484395", 00:26:34.842 "assigned_rate_limits": { 00:26:34.842 "rw_ios_per_sec": 0, 00:26:34.842 "rw_mbytes_per_sec": 0, 00:26:34.842 "r_mbytes_per_sec": 0, 00:26:34.842 "w_mbytes_per_sec": 0 00:26:34.842 }, 00:26:34.842 "claimed": false, 00:26:34.842 "zoned": false, 00:26:34.842 "supported_io_types": { 00:26:34.842 "read": true, 00:26:34.842 "write": true, 00:26:34.842 "unmap": true, 00:26:34.842 "flush": true, 00:26:34.842 "reset": true, 00:26:34.842 "nvme_admin": false, 00:26:34.842 "nvme_io": false, 00:26:34.842 "nvme_io_md": false, 00:26:34.842 "write_zeroes": true, 00:26:34.842 "zcopy": true, 00:26:34.842 "get_zone_info": false, 00:26:34.842 "zone_management": false, 00:26:34.842 "zone_append": false, 00:26:34.842 "compare": false, 00:26:34.842 "compare_and_write": false, 00:26:34.842 "abort": true, 00:26:34.842 "seek_hole": false, 00:26:34.842 "seek_data": false, 
00:26:34.842 "copy": true, 00:26:34.842 "nvme_iov_md": false 00:26:34.842 }, 00:26:34.842 "memory_domains": [ 00:26:34.842 { 00:26:34.842 "dma_device_id": "system", 00:26:34.842 "dma_device_type": 1 00:26:34.842 }, 00:26:34.842 { 00:26:34.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.842 "dma_device_type": 2 00:26:34.842 } 00:26:34.842 ], 00:26:34.842 "driver_specific": {} 00:26:34.842 } 00:26:34.842 ] 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.842 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.106 BaseBdev4 00:26:35.106 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.106 17:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:26:35.106 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:26:35.106 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:35.106 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:35.106 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:35.106 17:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:35.106 
17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:35.106 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.106 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.106 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.106 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:35.106 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.106 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.106 [ 00:26:35.106 { 00:26:35.106 "name": "BaseBdev4", 00:26:35.106 "aliases": [ 00:26:35.106 "a88a8abb-6a5f-416f-bab0-90b8a53689e0" 00:26:35.106 ], 00:26:35.106 "product_name": "Malloc disk", 00:26:35.106 "block_size": 512, 00:26:35.106 "num_blocks": 65536, 00:26:35.106 "uuid": "a88a8abb-6a5f-416f-bab0-90b8a53689e0", 00:26:35.106 "assigned_rate_limits": { 00:26:35.106 "rw_ios_per_sec": 0, 00:26:35.106 "rw_mbytes_per_sec": 0, 00:26:35.106 "r_mbytes_per_sec": 0, 00:26:35.106 "w_mbytes_per_sec": 0 00:26:35.106 }, 00:26:35.106 "claimed": false, 00:26:35.106 "zoned": false, 00:26:35.106 "supported_io_types": { 00:26:35.106 "read": true, 00:26:35.106 "write": true, 00:26:35.106 "unmap": true, 00:26:35.106 "flush": true, 00:26:35.106 "reset": true, 00:26:35.106 "nvme_admin": false, 00:26:35.106 "nvme_io": false, 00:26:35.106 "nvme_io_md": false, 00:26:35.106 "write_zeroes": true, 00:26:35.106 "zcopy": true, 00:26:35.106 "get_zone_info": false, 00:26:35.106 "zone_management": false, 00:26:35.106 "zone_append": false, 00:26:35.106 "compare": false, 00:26:35.106 "compare_and_write": false, 00:26:35.106 "abort": true, 00:26:35.106 "seek_hole": false, 00:26:35.106 "seek_data": false, 00:26:35.106 
"copy": true, 00:26:35.106 "nvme_iov_md": false 00:26:35.106 }, 00:26:35.106 "memory_domains": [ 00:26:35.106 { 00:26:35.106 "dma_device_id": "system", 00:26:35.106 "dma_device_type": 1 00:26:35.106 }, 00:26:35.106 { 00:26:35.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:35.106 "dma_device_type": 2 00:26:35.106 } 00:26:35.106 ], 00:26:35.106 "driver_specific": {} 00:26:35.106 } 00:26:35.106 ] 00:26:35.106 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.106 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:35.106 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:35.106 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:35.106 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:35.106 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.106 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.106 [2024-11-26 17:23:05.056010] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:35.107 [2024-11-26 17:23:05.056068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:35.107 [2024-11-26 17:23:05.056112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:35.107 [2024-11-26 17:23:05.058607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:35.107 [2024-11-26 17:23:05.058682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.107 17:23:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:35.107 "name": "Existed_Raid", 00:26:35.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.107 "strip_size_kb": 64, 00:26:35.107 "state": "configuring", 00:26:35.107 
"raid_level": "raid0", 00:26:35.107 "superblock": false, 00:26:35.107 "num_base_bdevs": 4, 00:26:35.107 "num_base_bdevs_discovered": 3, 00:26:35.107 "num_base_bdevs_operational": 4, 00:26:35.107 "base_bdevs_list": [ 00:26:35.107 { 00:26:35.107 "name": "BaseBdev1", 00:26:35.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.107 "is_configured": false, 00:26:35.107 "data_offset": 0, 00:26:35.107 "data_size": 0 00:26:35.107 }, 00:26:35.107 { 00:26:35.107 "name": "BaseBdev2", 00:26:35.107 "uuid": "e62e4234-d9a0-4ada-8580-b21a1958578a", 00:26:35.107 "is_configured": true, 00:26:35.107 "data_offset": 0, 00:26:35.107 "data_size": 65536 00:26:35.107 }, 00:26:35.107 { 00:26:35.107 "name": "BaseBdev3", 00:26:35.107 "uuid": "a109fb52-da52-42c8-b206-90c03e484395", 00:26:35.107 "is_configured": true, 00:26:35.107 "data_offset": 0, 00:26:35.107 "data_size": 65536 00:26:35.107 }, 00:26:35.107 { 00:26:35.107 "name": "BaseBdev4", 00:26:35.107 "uuid": "a88a8abb-6a5f-416f-bab0-90b8a53689e0", 00:26:35.107 "is_configured": true, 00:26:35.107 "data_offset": 0, 00:26:35.107 "data_size": 65536 00:26:35.107 } 00:26:35.107 ] 00:26:35.107 }' 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:35.107 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.680 [2024-11-26 17:23:05.531374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:35.680 "name": "Existed_Raid", 00:26:35.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.680 "strip_size_kb": 64, 00:26:35.680 "state": "configuring", 00:26:35.680 "raid_level": "raid0", 00:26:35.680 "superblock": false, 00:26:35.680 
"num_base_bdevs": 4, 00:26:35.680 "num_base_bdevs_discovered": 2, 00:26:35.680 "num_base_bdevs_operational": 4, 00:26:35.680 "base_bdevs_list": [ 00:26:35.680 { 00:26:35.680 "name": "BaseBdev1", 00:26:35.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.680 "is_configured": false, 00:26:35.680 "data_offset": 0, 00:26:35.680 "data_size": 0 00:26:35.680 }, 00:26:35.680 { 00:26:35.680 "name": null, 00:26:35.680 "uuid": "e62e4234-d9a0-4ada-8580-b21a1958578a", 00:26:35.680 "is_configured": false, 00:26:35.680 "data_offset": 0, 00:26:35.680 "data_size": 65536 00:26:35.680 }, 00:26:35.680 { 00:26:35.680 "name": "BaseBdev3", 00:26:35.680 "uuid": "a109fb52-da52-42c8-b206-90c03e484395", 00:26:35.680 "is_configured": true, 00:26:35.680 "data_offset": 0, 00:26:35.680 "data_size": 65536 00:26:35.680 }, 00:26:35.680 { 00:26:35.680 "name": "BaseBdev4", 00:26:35.680 "uuid": "a88a8abb-6a5f-416f-bab0-90b8a53689e0", 00:26:35.680 "is_configured": true, 00:26:35.680 "data_offset": 0, 00:26:35.680 "data_size": 65536 00:26:35.680 } 00:26:35.680 ] 00:26:35.680 }' 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:35.680 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.945 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:35.945 17:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.945 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.945 17:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.945 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.945 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:35.945 17:23:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:35.945 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.945 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.212 [2024-11-26 17:23:06.067972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:36.212 BaseBdev1 00:26:36.212 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.212 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:36.212 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:36.212 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:36.212 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:36.212 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:36.212 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:36.212 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:36.212 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.212 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.212 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.212 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:36.212 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.212 17:23:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:36.212 [ 00:26:36.212 { 00:26:36.212 "name": "BaseBdev1", 00:26:36.212 "aliases": [ 00:26:36.212 "ed06f8d7-c3ee-428f-a490-7a9d528c8cef" 00:26:36.212 ], 00:26:36.212 "product_name": "Malloc disk", 00:26:36.212 "block_size": 512, 00:26:36.212 "num_blocks": 65536, 00:26:36.212 "uuid": "ed06f8d7-c3ee-428f-a490-7a9d528c8cef", 00:26:36.212 "assigned_rate_limits": { 00:26:36.212 "rw_ios_per_sec": 0, 00:26:36.212 "rw_mbytes_per_sec": 0, 00:26:36.212 "r_mbytes_per_sec": 0, 00:26:36.212 "w_mbytes_per_sec": 0 00:26:36.212 }, 00:26:36.212 "claimed": true, 00:26:36.212 "claim_type": "exclusive_write", 00:26:36.212 "zoned": false, 00:26:36.212 "supported_io_types": { 00:26:36.212 "read": true, 00:26:36.212 "write": true, 00:26:36.212 "unmap": true, 00:26:36.212 "flush": true, 00:26:36.212 "reset": true, 00:26:36.212 "nvme_admin": false, 00:26:36.212 "nvme_io": false, 00:26:36.212 "nvme_io_md": false, 00:26:36.212 "write_zeroes": true, 00:26:36.212 "zcopy": true, 00:26:36.213 "get_zone_info": false, 00:26:36.213 "zone_management": false, 00:26:36.213 "zone_append": false, 00:26:36.213 "compare": false, 00:26:36.213 "compare_and_write": false, 00:26:36.213 "abort": true, 00:26:36.213 "seek_hole": false, 00:26:36.213 "seek_data": false, 00:26:36.213 "copy": true, 00:26:36.213 "nvme_iov_md": false 00:26:36.213 }, 00:26:36.213 "memory_domains": [ 00:26:36.213 { 00:26:36.213 "dma_device_id": "system", 00:26:36.213 "dma_device_type": 1 00:26:36.213 }, 00:26:36.213 { 00:26:36.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.213 "dma_device_type": 2 00:26:36.213 } 00:26:36.213 ], 00:26:36.213 "driver_specific": {} 00:26:36.213 } 00:26:36.213 ] 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:36.213 "name": "Existed_Raid", 00:26:36.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.213 "strip_size_kb": 64, 00:26:36.213 "state": "configuring", 00:26:36.213 "raid_level": "raid0", 00:26:36.213 "superblock": false, 
00:26:36.213 "num_base_bdevs": 4, 00:26:36.213 "num_base_bdevs_discovered": 3, 00:26:36.213 "num_base_bdevs_operational": 4, 00:26:36.213 "base_bdevs_list": [ 00:26:36.213 { 00:26:36.213 "name": "BaseBdev1", 00:26:36.213 "uuid": "ed06f8d7-c3ee-428f-a490-7a9d528c8cef", 00:26:36.213 "is_configured": true, 00:26:36.213 "data_offset": 0, 00:26:36.213 "data_size": 65536 00:26:36.213 }, 00:26:36.213 { 00:26:36.213 "name": null, 00:26:36.213 "uuid": "e62e4234-d9a0-4ada-8580-b21a1958578a", 00:26:36.213 "is_configured": false, 00:26:36.213 "data_offset": 0, 00:26:36.213 "data_size": 65536 00:26:36.213 }, 00:26:36.213 { 00:26:36.213 "name": "BaseBdev3", 00:26:36.213 "uuid": "a109fb52-da52-42c8-b206-90c03e484395", 00:26:36.213 "is_configured": true, 00:26:36.213 "data_offset": 0, 00:26:36.213 "data_size": 65536 00:26:36.213 }, 00:26:36.213 { 00:26:36.213 "name": "BaseBdev4", 00:26:36.213 "uuid": "a88a8abb-6a5f-416f-bab0-90b8a53689e0", 00:26:36.213 "is_configured": true, 00:26:36.213 "data_offset": 0, 00:26:36.213 "data_size": 65536 00:26:36.213 } 00:26:36.213 ] 00:26:36.213 }' 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:36.213 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.475 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:36.475 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.475 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.475 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:36.475 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:36.734 17:23:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.734 [2024-11-26 17:23:06.591436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:36.734 "name": "Existed_Raid", 00:26:36.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.734 "strip_size_kb": 64, 00:26:36.734 "state": "configuring", 00:26:36.734 "raid_level": "raid0", 00:26:36.734 "superblock": false, 00:26:36.734 "num_base_bdevs": 4, 00:26:36.734 "num_base_bdevs_discovered": 2, 00:26:36.734 "num_base_bdevs_operational": 4, 00:26:36.734 "base_bdevs_list": [ 00:26:36.734 { 00:26:36.734 "name": "BaseBdev1", 00:26:36.734 "uuid": "ed06f8d7-c3ee-428f-a490-7a9d528c8cef", 00:26:36.734 "is_configured": true, 00:26:36.734 "data_offset": 0, 00:26:36.734 "data_size": 65536 00:26:36.734 }, 00:26:36.734 { 00:26:36.734 "name": null, 00:26:36.734 "uuid": "e62e4234-d9a0-4ada-8580-b21a1958578a", 00:26:36.734 "is_configured": false, 00:26:36.734 "data_offset": 0, 00:26:36.734 "data_size": 65536 00:26:36.734 }, 00:26:36.734 { 00:26:36.734 "name": null, 00:26:36.734 "uuid": "a109fb52-da52-42c8-b206-90c03e484395", 00:26:36.734 "is_configured": false, 00:26:36.734 "data_offset": 0, 00:26:36.734 "data_size": 65536 00:26:36.734 }, 00:26:36.734 { 00:26:36.734 "name": "BaseBdev4", 00:26:36.734 "uuid": "a88a8abb-6a5f-416f-bab0-90b8a53689e0", 00:26:36.734 "is_configured": true, 00:26:36.734 "data_offset": 0, 00:26:36.734 "data_size": 65536 00:26:36.734 } 00:26:36.734 ] 00:26:36.734 }' 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:36.734 17:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.994 [2024-11-26 17:23:07.074774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.994 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.254 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.254 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:37.254 "name": "Existed_Raid", 00:26:37.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.254 "strip_size_kb": 64, 00:26:37.254 "state": "configuring", 00:26:37.254 "raid_level": "raid0", 00:26:37.254 "superblock": false, 00:26:37.254 "num_base_bdevs": 4, 00:26:37.254 "num_base_bdevs_discovered": 3, 00:26:37.254 "num_base_bdevs_operational": 4, 00:26:37.254 "base_bdevs_list": [ 00:26:37.254 { 00:26:37.254 "name": "BaseBdev1", 00:26:37.254 "uuid": "ed06f8d7-c3ee-428f-a490-7a9d528c8cef", 00:26:37.254 "is_configured": true, 00:26:37.254 "data_offset": 0, 00:26:37.254 "data_size": 65536 00:26:37.254 }, 00:26:37.254 { 00:26:37.254 "name": null, 00:26:37.254 "uuid": "e62e4234-d9a0-4ada-8580-b21a1958578a", 00:26:37.254 "is_configured": false, 00:26:37.254 "data_offset": 0, 00:26:37.254 "data_size": 65536 00:26:37.254 }, 00:26:37.254 { 00:26:37.254 "name": "BaseBdev3", 00:26:37.254 "uuid": "a109fb52-da52-42c8-b206-90c03e484395", 00:26:37.254 "is_configured": 
true, 00:26:37.254 "data_offset": 0, 00:26:37.254 "data_size": 65536 00:26:37.254 }, 00:26:37.254 { 00:26:37.254 "name": "BaseBdev4", 00:26:37.254 "uuid": "a88a8abb-6a5f-416f-bab0-90b8a53689e0", 00:26:37.254 "is_configured": true, 00:26:37.254 "data_offset": 0, 00:26:37.254 "data_size": 65536 00:26:37.254 } 00:26:37.254 ] 00:26:37.254 }' 00:26:37.254 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:37.254 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.529 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:37.529 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:37.529 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.529 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.529 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.529 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:37.529 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:37.529 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.530 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.530 [2024-11-26 17:23:07.606083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:37.793 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.793 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:37.793 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:26:37.793 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:37.793 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:37.793 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:37.793 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:37.793 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:37.793 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:37.793 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:37.793 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:37.793 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:37.793 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.793 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.794 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:37.794 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.794 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:37.794 "name": "Existed_Raid", 00:26:37.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.794 "strip_size_kb": 64, 00:26:37.794 "state": "configuring", 00:26:37.794 "raid_level": "raid0", 00:26:37.794 "superblock": false, 00:26:37.794 "num_base_bdevs": 4, 00:26:37.794 "num_base_bdevs_discovered": 2, 00:26:37.794 "num_base_bdevs_operational": 4, 00:26:37.794 
"base_bdevs_list": [ 00:26:37.794 { 00:26:37.794 "name": null, 00:26:37.794 "uuid": "ed06f8d7-c3ee-428f-a490-7a9d528c8cef", 00:26:37.794 "is_configured": false, 00:26:37.794 "data_offset": 0, 00:26:37.794 "data_size": 65536 00:26:37.794 }, 00:26:37.794 { 00:26:37.794 "name": null, 00:26:37.794 "uuid": "e62e4234-d9a0-4ada-8580-b21a1958578a", 00:26:37.794 "is_configured": false, 00:26:37.794 "data_offset": 0, 00:26:37.794 "data_size": 65536 00:26:37.794 }, 00:26:37.794 { 00:26:37.794 "name": "BaseBdev3", 00:26:37.794 "uuid": "a109fb52-da52-42c8-b206-90c03e484395", 00:26:37.794 "is_configured": true, 00:26:37.794 "data_offset": 0, 00:26:37.794 "data_size": 65536 00:26:37.794 }, 00:26:37.794 { 00:26:37.794 "name": "BaseBdev4", 00:26:37.794 "uuid": "a88a8abb-6a5f-416f-bab0-90b8a53689e0", 00:26:37.794 "is_configured": true, 00:26:37.794 "data_offset": 0, 00:26:37.794 "data_size": 65536 00:26:37.794 } 00:26:37.794 ] 00:26:37.794 }' 00:26:37.794 17:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:37.794 17:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.054 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:38.054 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.054 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.054 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:38.313 17:23:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.313 [2024-11-26 17:23:08.200875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.313 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:38.313 "name": "Existed_Raid", 00:26:38.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.313 "strip_size_kb": 64, 00:26:38.313 "state": "configuring", 00:26:38.313 "raid_level": "raid0", 00:26:38.313 "superblock": false, 00:26:38.313 "num_base_bdevs": 4, 00:26:38.313 "num_base_bdevs_discovered": 3, 00:26:38.313 "num_base_bdevs_operational": 4, 00:26:38.313 "base_bdevs_list": [ 00:26:38.313 { 00:26:38.313 "name": null, 00:26:38.313 "uuid": "ed06f8d7-c3ee-428f-a490-7a9d528c8cef", 00:26:38.313 "is_configured": false, 00:26:38.313 "data_offset": 0, 00:26:38.313 "data_size": 65536 00:26:38.313 }, 00:26:38.313 { 00:26:38.313 "name": "BaseBdev2", 00:26:38.314 "uuid": "e62e4234-d9a0-4ada-8580-b21a1958578a", 00:26:38.314 "is_configured": true, 00:26:38.314 "data_offset": 0, 00:26:38.314 "data_size": 65536 00:26:38.314 }, 00:26:38.314 { 00:26:38.314 "name": "BaseBdev3", 00:26:38.314 "uuid": "a109fb52-da52-42c8-b206-90c03e484395", 00:26:38.314 "is_configured": true, 00:26:38.314 "data_offset": 0, 00:26:38.314 "data_size": 65536 00:26:38.314 }, 00:26:38.314 { 00:26:38.314 "name": "BaseBdev4", 00:26:38.314 "uuid": "a88a8abb-6a5f-416f-bab0-90b8a53689e0", 00:26:38.314 "is_configured": true, 00:26:38.314 "data_offset": 0, 00:26:38.314 "data_size": 65536 00:26:38.314 } 00:26:38.314 ] 00:26:38.314 }' 00:26:38.314 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:38.314 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.573 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:38.573 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:26:38.573 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.573 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ed06f8d7-c3ee-428f-a490-7a9d528c8cef 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.912 [2024-11-26 17:23:08.797980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:38.912 [2024-11-26 17:23:08.798041] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:38.912 [2024-11-26 17:23:08.798052] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:26:38.912 [2024-11-26 17:23:08.798377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:38.912 [2024-11-26 17:23:08.798562] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000008200 00:26:38.912 [2024-11-26 17:23:08.798598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:38.912 [2024-11-26 17:23:08.798877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:38.912 NewBaseBdev 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.912 [ 00:26:38.912 { 00:26:38.912 "name": "NewBaseBdev", 00:26:38.912 
"aliases": [ 00:26:38.912 "ed06f8d7-c3ee-428f-a490-7a9d528c8cef" 00:26:38.912 ], 00:26:38.912 "product_name": "Malloc disk", 00:26:38.912 "block_size": 512, 00:26:38.912 "num_blocks": 65536, 00:26:38.912 "uuid": "ed06f8d7-c3ee-428f-a490-7a9d528c8cef", 00:26:38.912 "assigned_rate_limits": { 00:26:38.912 "rw_ios_per_sec": 0, 00:26:38.912 "rw_mbytes_per_sec": 0, 00:26:38.912 "r_mbytes_per_sec": 0, 00:26:38.912 "w_mbytes_per_sec": 0 00:26:38.912 }, 00:26:38.912 "claimed": true, 00:26:38.912 "claim_type": "exclusive_write", 00:26:38.912 "zoned": false, 00:26:38.912 "supported_io_types": { 00:26:38.912 "read": true, 00:26:38.912 "write": true, 00:26:38.912 "unmap": true, 00:26:38.912 "flush": true, 00:26:38.912 "reset": true, 00:26:38.912 "nvme_admin": false, 00:26:38.912 "nvme_io": false, 00:26:38.912 "nvme_io_md": false, 00:26:38.912 "write_zeroes": true, 00:26:38.912 "zcopy": true, 00:26:38.912 "get_zone_info": false, 00:26:38.912 "zone_management": false, 00:26:38.912 "zone_append": false, 00:26:38.912 "compare": false, 00:26:38.912 "compare_and_write": false, 00:26:38.912 "abort": true, 00:26:38.912 "seek_hole": false, 00:26:38.912 "seek_data": false, 00:26:38.912 "copy": true, 00:26:38.912 "nvme_iov_md": false 00:26:38.912 }, 00:26:38.912 "memory_domains": [ 00:26:38.912 { 00:26:38.912 "dma_device_id": "system", 00:26:38.912 "dma_device_type": 1 00:26:38.912 }, 00:26:38.912 { 00:26:38.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:38.912 "dma_device_type": 2 00:26:38.912 } 00:26:38.912 ], 00:26:38.912 "driver_specific": {} 00:26:38.912 } 00:26:38.912 ] 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:38.912 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.913 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:38.913 "name": "Existed_Raid", 00:26:38.913 "uuid": "bd84178f-69e3-4fdd-b2e5-7e39316d8cb1", 00:26:38.913 "strip_size_kb": 64, 00:26:38.913 "state": "online", 00:26:38.913 "raid_level": "raid0", 00:26:38.913 "superblock": false, 00:26:38.913 "num_base_bdevs": 4, 00:26:38.913 "num_base_bdevs_discovered": 4, 00:26:38.913 "num_base_bdevs_operational": 4, 00:26:38.913 
"base_bdevs_list": [ 00:26:38.913 { 00:26:38.913 "name": "NewBaseBdev", 00:26:38.913 "uuid": "ed06f8d7-c3ee-428f-a490-7a9d528c8cef", 00:26:38.913 "is_configured": true, 00:26:38.913 "data_offset": 0, 00:26:38.913 "data_size": 65536 00:26:38.913 }, 00:26:38.913 { 00:26:38.913 "name": "BaseBdev2", 00:26:38.913 "uuid": "e62e4234-d9a0-4ada-8580-b21a1958578a", 00:26:38.913 "is_configured": true, 00:26:38.913 "data_offset": 0, 00:26:38.913 "data_size": 65536 00:26:38.913 }, 00:26:38.913 { 00:26:38.913 "name": "BaseBdev3", 00:26:38.913 "uuid": "a109fb52-da52-42c8-b206-90c03e484395", 00:26:38.913 "is_configured": true, 00:26:38.913 "data_offset": 0, 00:26:38.913 "data_size": 65536 00:26:38.913 }, 00:26:38.913 { 00:26:38.913 "name": "BaseBdev4", 00:26:38.913 "uuid": "a88a8abb-6a5f-416f-bab0-90b8a53689e0", 00:26:38.913 "is_configured": true, 00:26:38.913 "data_offset": 0, 00:26:38.913 "data_size": 65536 00:26:38.913 } 00:26:38.913 ] 00:26:38.913 }' 00:26:38.913 17:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:38.913 17:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.479 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:39.479 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:39.479 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:39.479 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:39.479 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:39.479 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:39.479 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:39.479 17:23:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.479 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:39.479 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.479 [2024-11-26 17:23:09.334104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:39.479 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.479 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:39.479 "name": "Existed_Raid", 00:26:39.479 "aliases": [ 00:26:39.479 "bd84178f-69e3-4fdd-b2e5-7e39316d8cb1" 00:26:39.479 ], 00:26:39.479 "product_name": "Raid Volume", 00:26:39.479 "block_size": 512, 00:26:39.479 "num_blocks": 262144, 00:26:39.479 "uuid": "bd84178f-69e3-4fdd-b2e5-7e39316d8cb1", 00:26:39.479 "assigned_rate_limits": { 00:26:39.479 "rw_ios_per_sec": 0, 00:26:39.479 "rw_mbytes_per_sec": 0, 00:26:39.479 "r_mbytes_per_sec": 0, 00:26:39.479 "w_mbytes_per_sec": 0 00:26:39.479 }, 00:26:39.479 "claimed": false, 00:26:39.479 "zoned": false, 00:26:39.479 "supported_io_types": { 00:26:39.479 "read": true, 00:26:39.479 "write": true, 00:26:39.479 "unmap": true, 00:26:39.479 "flush": true, 00:26:39.479 "reset": true, 00:26:39.479 "nvme_admin": false, 00:26:39.479 "nvme_io": false, 00:26:39.479 "nvme_io_md": false, 00:26:39.479 "write_zeroes": true, 00:26:39.479 "zcopy": false, 00:26:39.479 "get_zone_info": false, 00:26:39.479 "zone_management": false, 00:26:39.479 "zone_append": false, 00:26:39.479 "compare": false, 00:26:39.479 "compare_and_write": false, 00:26:39.479 "abort": false, 00:26:39.479 "seek_hole": false, 00:26:39.479 "seek_data": false, 00:26:39.479 "copy": false, 00:26:39.479 "nvme_iov_md": false 00:26:39.479 }, 00:26:39.479 "memory_domains": [ 00:26:39.479 { 00:26:39.479 "dma_device_id": "system", 00:26:39.479 "dma_device_type": 1 
00:26:39.479 }, 00:26:39.479 { 00:26:39.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:39.479 "dma_device_type": 2 00:26:39.479 }, 00:26:39.479 { 00:26:39.479 "dma_device_id": "system", 00:26:39.479 "dma_device_type": 1 00:26:39.479 }, 00:26:39.479 { 00:26:39.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:39.479 "dma_device_type": 2 00:26:39.479 }, 00:26:39.479 { 00:26:39.479 "dma_device_id": "system", 00:26:39.479 "dma_device_type": 1 00:26:39.479 }, 00:26:39.479 { 00:26:39.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:39.479 "dma_device_type": 2 00:26:39.479 }, 00:26:39.479 { 00:26:39.479 "dma_device_id": "system", 00:26:39.479 "dma_device_type": 1 00:26:39.479 }, 00:26:39.479 { 00:26:39.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:39.479 "dma_device_type": 2 00:26:39.479 } 00:26:39.479 ], 00:26:39.479 "driver_specific": { 00:26:39.479 "raid": { 00:26:39.479 "uuid": "bd84178f-69e3-4fdd-b2e5-7e39316d8cb1", 00:26:39.479 "strip_size_kb": 64, 00:26:39.479 "state": "online", 00:26:39.479 "raid_level": "raid0", 00:26:39.479 "superblock": false, 00:26:39.479 "num_base_bdevs": 4, 00:26:39.479 "num_base_bdevs_discovered": 4, 00:26:39.479 "num_base_bdevs_operational": 4, 00:26:39.479 "base_bdevs_list": [ 00:26:39.479 { 00:26:39.479 "name": "NewBaseBdev", 00:26:39.479 "uuid": "ed06f8d7-c3ee-428f-a490-7a9d528c8cef", 00:26:39.479 "is_configured": true, 00:26:39.479 "data_offset": 0, 00:26:39.479 "data_size": 65536 00:26:39.479 }, 00:26:39.479 { 00:26:39.480 "name": "BaseBdev2", 00:26:39.480 "uuid": "e62e4234-d9a0-4ada-8580-b21a1958578a", 00:26:39.480 "is_configured": true, 00:26:39.480 "data_offset": 0, 00:26:39.480 "data_size": 65536 00:26:39.480 }, 00:26:39.480 { 00:26:39.480 "name": "BaseBdev3", 00:26:39.480 "uuid": "a109fb52-da52-42c8-b206-90c03e484395", 00:26:39.480 "is_configured": true, 00:26:39.480 "data_offset": 0, 00:26:39.480 "data_size": 65536 00:26:39.480 }, 00:26:39.480 { 00:26:39.480 "name": "BaseBdev4", 00:26:39.480 "uuid": 
"a88a8abb-6a5f-416f-bab0-90b8a53689e0", 00:26:39.480 "is_configured": true, 00:26:39.480 "data_offset": 0, 00:26:39.480 "data_size": 65536 00:26:39.480 } 00:26:39.480 ] 00:26:39.480 } 00:26:39.480 } 00:26:39.480 }' 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:39.480 BaseBdev2 00:26:39.480 BaseBdev3 00:26:39.480 BaseBdev4' 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.480 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.739 [2024-11-26 17:23:09.625675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:39.739 [2024-11-26 17:23:09.625719] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:39.739 [2024-11-26 17:23:09.625820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:39.739 [2024-11-26 17:23:09.625902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:39.739 [2024-11-26 17:23:09.625916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69474 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69474 ']' 00:26:39.739 17:23:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69474 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69474 00:26:39.739 killing process with pid 69474 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69474' 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69474 00:26:39.739 [2024-11-26 17:23:09.675839] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:39.739 17:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69474 00:26:40.305 [2024-11-26 17:23:10.112028] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:26:41.681 00:26:41.681 real 0m12.181s 00:26:41.681 user 0m19.207s 00:26:41.681 sys 0m2.610s 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.681 ************************************ 00:26:41.681 END TEST raid_state_function_test 00:26:41.681 ************************************ 00:26:41.681 17:23:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:26:41.681 17:23:11 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:41.681 17:23:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:41.681 17:23:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:41.681 ************************************ 00:26:41.681 START TEST raid_state_function_test_sb 00:26:41.681 ************************************ 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:41.681 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:26:41.682 Process raid pid: 70154 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70154 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70154' 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70154 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70154 ']' 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.682 17:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.682 [2024-11-26 17:23:11.560717] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:26:41.682 [2024-11-26 17:23:11.560895] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.682 [2024-11-26 17:23:11.740141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.941 [2024-11-26 17:23:11.889508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.200 [2024-11-26 17:23:12.110895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:42.200 [2024-11-26 17:23:12.110954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:42.459 [2024-11-26 17:23:12.419765] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:42.459 [2024-11-26 17:23:12.419965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:42.459 [2024-11-26 17:23:12.420134] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:42.459 [2024-11-26 17:23:12.420181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:42.459 [2024-11-26 17:23:12.420261] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:26:42.459 [2024-11-26 17:23:12.420303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:42.459 [2024-11-26 17:23:12.420331] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:42.459 [2024-11-26 17:23:12.420406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:42.459 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:42.460 17:23:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.460 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:42.460 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.460 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:42.460 "name": "Existed_Raid", 00:26:42.460 "uuid": "d189192b-a8df-4298-a704-53f5d89a2390", 00:26:42.460 "strip_size_kb": 64, 00:26:42.460 "state": "configuring", 00:26:42.460 "raid_level": "raid0", 00:26:42.460 "superblock": true, 00:26:42.460 "num_base_bdevs": 4, 00:26:42.460 "num_base_bdevs_discovered": 0, 00:26:42.460 "num_base_bdevs_operational": 4, 00:26:42.460 "base_bdevs_list": [ 00:26:42.460 { 00:26:42.460 "name": "BaseBdev1", 00:26:42.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.460 "is_configured": false, 00:26:42.460 "data_offset": 0, 00:26:42.460 "data_size": 0 00:26:42.460 }, 00:26:42.460 { 00:26:42.460 "name": "BaseBdev2", 00:26:42.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.460 "is_configured": false, 00:26:42.460 "data_offset": 0, 00:26:42.460 "data_size": 0 00:26:42.460 }, 00:26:42.460 { 00:26:42.460 "name": "BaseBdev3", 00:26:42.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.460 "is_configured": false, 00:26:42.460 "data_offset": 0, 00:26:42.460 "data_size": 0 00:26:42.460 }, 00:26:42.460 { 00:26:42.460 "name": "BaseBdev4", 00:26:42.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.460 "is_configured": false, 00:26:42.460 "data_offset": 0, 00:26:42.460 "data_size": 0 00:26:42.460 } 00:26:42.460 ] 00:26:42.460 }' 00:26:42.460 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:42.460 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.028 [2024-11-26 17:23:12.843173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:43.028 [2024-11-26 17:23:12.843230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.028 [2024-11-26 17:23:12.855180] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:43.028 [2024-11-26 17:23:12.855425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:43.028 [2024-11-26 17:23:12.855452] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:43.028 [2024-11-26 17:23:12.855466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:43.028 [2024-11-26 17:23:12.855475] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:43.028 [2024-11-26 17:23:12.855489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:43.028 [2024-11-26 17:23:12.855497] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:26:43.028 [2024-11-26 17:23:12.855509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.028 [2024-11-26 17:23:12.906559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:43.028 BaseBdev1 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.028 [ 00:26:43.028 { 00:26:43.028 "name": "BaseBdev1", 00:26:43.028 "aliases": [ 00:26:43.028 "4b5f3182-48f8-4113-bb41-0ad343b5fe69" 00:26:43.028 ], 00:26:43.028 "product_name": "Malloc disk", 00:26:43.028 "block_size": 512, 00:26:43.028 "num_blocks": 65536, 00:26:43.028 "uuid": "4b5f3182-48f8-4113-bb41-0ad343b5fe69", 00:26:43.028 "assigned_rate_limits": { 00:26:43.028 "rw_ios_per_sec": 0, 00:26:43.028 "rw_mbytes_per_sec": 0, 00:26:43.028 "r_mbytes_per_sec": 0, 00:26:43.028 "w_mbytes_per_sec": 0 00:26:43.028 }, 00:26:43.028 "claimed": true, 00:26:43.028 "claim_type": "exclusive_write", 00:26:43.028 "zoned": false, 00:26:43.028 "supported_io_types": { 00:26:43.028 "read": true, 00:26:43.028 "write": true, 00:26:43.028 "unmap": true, 00:26:43.028 "flush": true, 00:26:43.028 "reset": true, 00:26:43.028 "nvme_admin": false, 00:26:43.028 "nvme_io": false, 00:26:43.028 "nvme_io_md": false, 00:26:43.028 "write_zeroes": true, 00:26:43.028 "zcopy": true, 00:26:43.028 "get_zone_info": false, 00:26:43.028 "zone_management": false, 00:26:43.028 "zone_append": false, 00:26:43.028 "compare": false, 00:26:43.028 "compare_and_write": false, 00:26:43.028 "abort": true, 00:26:43.028 "seek_hole": false, 00:26:43.028 "seek_data": false, 00:26:43.028 "copy": true, 00:26:43.028 "nvme_iov_md": false 00:26:43.028 }, 00:26:43.028 "memory_domains": [ 00:26:43.028 { 00:26:43.028 "dma_device_id": "system", 00:26:43.028 "dma_device_type": 1 00:26:43.028 }, 00:26:43.028 { 00:26:43.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:43.028 "dma_device_type": 2 00:26:43.028 } 00:26:43.028 ], 00:26:43.028 "driver_specific": {} 
00:26:43.028 } 00:26:43.028 ] 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.028 17:23:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.028 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:43.029 "name": "Existed_Raid", 00:26:43.029 "uuid": "69780475-b2cc-411c-bed4-9c340bc10eb2", 00:26:43.029 "strip_size_kb": 64, 00:26:43.029 "state": "configuring", 00:26:43.029 "raid_level": "raid0", 00:26:43.029 "superblock": true, 00:26:43.029 "num_base_bdevs": 4, 00:26:43.029 "num_base_bdevs_discovered": 1, 00:26:43.029 "num_base_bdevs_operational": 4, 00:26:43.029 "base_bdevs_list": [ 00:26:43.029 { 00:26:43.029 "name": "BaseBdev1", 00:26:43.029 "uuid": "4b5f3182-48f8-4113-bb41-0ad343b5fe69", 00:26:43.029 "is_configured": true, 00:26:43.029 "data_offset": 2048, 00:26:43.029 "data_size": 63488 00:26:43.029 }, 00:26:43.029 { 00:26:43.029 "name": "BaseBdev2", 00:26:43.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.029 "is_configured": false, 00:26:43.029 "data_offset": 0, 00:26:43.029 "data_size": 0 00:26:43.029 }, 00:26:43.029 { 00:26:43.029 "name": "BaseBdev3", 00:26:43.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.029 "is_configured": false, 00:26:43.029 "data_offset": 0, 00:26:43.029 "data_size": 0 00:26:43.029 }, 00:26:43.029 { 00:26:43.029 "name": "BaseBdev4", 00:26:43.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.029 "is_configured": false, 00:26:43.029 "data_offset": 0, 00:26:43.029 "data_size": 0 00:26:43.029 } 00:26:43.029 ] 00:26:43.029 }' 00:26:43.029 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:43.029 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.287 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:43.287 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.287 17:23:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:26:43.287 [2024-11-26 17:23:13.397923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:43.287 [2024-11-26 17:23:13.398000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.547 [2024-11-26 17:23:13.410022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:43.547 [2024-11-26 17:23:13.412548] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:43.547 [2024-11-26 17:23:13.412716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:43.547 [2024-11-26 17:23:13.412832] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:43.547 [2024-11-26 17:23:13.412883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:43.547 [2024-11-26 17:23:13.412913] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:43.547 [2024-11-26 17:23:13.412946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:43.547 17:23:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:43.547 "name": 
"Existed_Raid", 00:26:43.547 "uuid": "6d1aa663-bac5-40c7-badf-a4d552a8c368", 00:26:43.547 "strip_size_kb": 64, 00:26:43.547 "state": "configuring", 00:26:43.547 "raid_level": "raid0", 00:26:43.547 "superblock": true, 00:26:43.547 "num_base_bdevs": 4, 00:26:43.547 "num_base_bdevs_discovered": 1, 00:26:43.547 "num_base_bdevs_operational": 4, 00:26:43.547 "base_bdevs_list": [ 00:26:43.547 { 00:26:43.547 "name": "BaseBdev1", 00:26:43.547 "uuid": "4b5f3182-48f8-4113-bb41-0ad343b5fe69", 00:26:43.547 "is_configured": true, 00:26:43.547 "data_offset": 2048, 00:26:43.547 "data_size": 63488 00:26:43.547 }, 00:26:43.547 { 00:26:43.547 "name": "BaseBdev2", 00:26:43.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.547 "is_configured": false, 00:26:43.547 "data_offset": 0, 00:26:43.547 "data_size": 0 00:26:43.547 }, 00:26:43.547 { 00:26:43.547 "name": "BaseBdev3", 00:26:43.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.547 "is_configured": false, 00:26:43.547 "data_offset": 0, 00:26:43.547 "data_size": 0 00:26:43.547 }, 00:26:43.547 { 00:26:43.547 "name": "BaseBdev4", 00:26:43.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.547 "is_configured": false, 00:26:43.547 "data_offset": 0, 00:26:43.547 "data_size": 0 00:26:43.547 } 00:26:43.547 ] 00:26:43.547 }' 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:43.547 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.806 [2024-11-26 17:23:13.901607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:26:43.806 BaseBdev2 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.806 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.064 [ 00:26:44.064 { 00:26:44.064 "name": "BaseBdev2", 00:26:44.064 "aliases": [ 00:26:44.064 "817ce672-782c-45ea-b5cb-1c79875cd40b" 00:26:44.064 ], 00:26:44.064 "product_name": "Malloc disk", 00:26:44.064 "block_size": 512, 00:26:44.064 "num_blocks": 65536, 00:26:44.064 "uuid": "817ce672-782c-45ea-b5cb-1c79875cd40b", 00:26:44.064 
"assigned_rate_limits": { 00:26:44.064 "rw_ios_per_sec": 0, 00:26:44.064 "rw_mbytes_per_sec": 0, 00:26:44.064 "r_mbytes_per_sec": 0, 00:26:44.064 "w_mbytes_per_sec": 0 00:26:44.064 }, 00:26:44.064 "claimed": true, 00:26:44.064 "claim_type": "exclusive_write", 00:26:44.064 "zoned": false, 00:26:44.064 "supported_io_types": { 00:26:44.064 "read": true, 00:26:44.064 "write": true, 00:26:44.064 "unmap": true, 00:26:44.064 "flush": true, 00:26:44.064 "reset": true, 00:26:44.064 "nvme_admin": false, 00:26:44.064 "nvme_io": false, 00:26:44.064 "nvme_io_md": false, 00:26:44.064 "write_zeroes": true, 00:26:44.064 "zcopy": true, 00:26:44.064 "get_zone_info": false, 00:26:44.064 "zone_management": false, 00:26:44.064 "zone_append": false, 00:26:44.064 "compare": false, 00:26:44.064 "compare_and_write": false, 00:26:44.064 "abort": true, 00:26:44.064 "seek_hole": false, 00:26:44.064 "seek_data": false, 00:26:44.064 "copy": true, 00:26:44.064 "nvme_iov_md": false 00:26:44.064 }, 00:26:44.064 "memory_domains": [ 00:26:44.064 { 00:26:44.064 "dma_device_id": "system", 00:26:44.064 "dma_device_type": 1 00:26:44.064 }, 00:26:44.064 { 00:26:44.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:44.064 "dma_device_type": 2 00:26:44.064 } 00:26:44.064 ], 00:26:44.064 "driver_specific": {} 00:26:44.064 } 00:26:44.064 ] 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:44.064 "name": "Existed_Raid", 00:26:44.064 "uuid": "6d1aa663-bac5-40c7-badf-a4d552a8c368", 00:26:44.064 "strip_size_kb": 64, 00:26:44.064 "state": "configuring", 00:26:44.064 "raid_level": "raid0", 00:26:44.064 "superblock": true, 00:26:44.064 "num_base_bdevs": 4, 00:26:44.064 "num_base_bdevs_discovered": 2, 00:26:44.064 "num_base_bdevs_operational": 4, 
00:26:44.064 "base_bdevs_list": [ 00:26:44.064 { 00:26:44.064 "name": "BaseBdev1", 00:26:44.064 "uuid": "4b5f3182-48f8-4113-bb41-0ad343b5fe69", 00:26:44.064 "is_configured": true, 00:26:44.064 "data_offset": 2048, 00:26:44.064 "data_size": 63488 00:26:44.064 }, 00:26:44.064 { 00:26:44.064 "name": "BaseBdev2", 00:26:44.064 "uuid": "817ce672-782c-45ea-b5cb-1c79875cd40b", 00:26:44.064 "is_configured": true, 00:26:44.064 "data_offset": 2048, 00:26:44.064 "data_size": 63488 00:26:44.064 }, 00:26:44.064 { 00:26:44.064 "name": "BaseBdev3", 00:26:44.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:44.064 "is_configured": false, 00:26:44.064 "data_offset": 0, 00:26:44.064 "data_size": 0 00:26:44.064 }, 00:26:44.064 { 00:26:44.064 "name": "BaseBdev4", 00:26:44.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:44.064 "is_configured": false, 00:26:44.064 "data_offset": 0, 00:26:44.064 "data_size": 0 00:26:44.064 } 00:26:44.064 ] 00:26:44.064 }' 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:44.064 17:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.322 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:44.322 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.322 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.322 [2024-11-26 17:23:14.418882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:44.322 BaseBdev3 00:26:44.322 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.322 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:44.323 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:26:44.323 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:44.323 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:44.323 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:44.323 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:44.323 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:44.323 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.323 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.323 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.323 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:44.323 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.323 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.582 [ 00:26:44.582 { 00:26:44.582 "name": "BaseBdev3", 00:26:44.582 "aliases": [ 00:26:44.582 "3b2fcad8-2796-47f9-b152-5b541d934d0c" 00:26:44.582 ], 00:26:44.582 "product_name": "Malloc disk", 00:26:44.582 "block_size": 512, 00:26:44.582 "num_blocks": 65536, 00:26:44.582 "uuid": "3b2fcad8-2796-47f9-b152-5b541d934d0c", 00:26:44.582 "assigned_rate_limits": { 00:26:44.582 "rw_ios_per_sec": 0, 00:26:44.582 "rw_mbytes_per_sec": 0, 00:26:44.582 "r_mbytes_per_sec": 0, 00:26:44.582 "w_mbytes_per_sec": 0 00:26:44.582 }, 00:26:44.582 "claimed": true, 00:26:44.582 "claim_type": "exclusive_write", 00:26:44.582 "zoned": false, 00:26:44.582 "supported_io_types": { 00:26:44.582 "read": true, 00:26:44.582 
"write": true, 00:26:44.582 "unmap": true, 00:26:44.582 "flush": true, 00:26:44.582 "reset": true, 00:26:44.582 "nvme_admin": false, 00:26:44.582 "nvme_io": false, 00:26:44.582 "nvme_io_md": false, 00:26:44.582 "write_zeroes": true, 00:26:44.582 "zcopy": true, 00:26:44.582 "get_zone_info": false, 00:26:44.582 "zone_management": false, 00:26:44.582 "zone_append": false, 00:26:44.582 "compare": false, 00:26:44.582 "compare_and_write": false, 00:26:44.582 "abort": true, 00:26:44.582 "seek_hole": false, 00:26:44.582 "seek_data": false, 00:26:44.582 "copy": true, 00:26:44.582 "nvme_iov_md": false 00:26:44.582 }, 00:26:44.582 "memory_domains": [ 00:26:44.582 { 00:26:44.582 "dma_device_id": "system", 00:26:44.582 "dma_device_type": 1 00:26:44.582 }, 00:26:44.582 { 00:26:44.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:44.582 "dma_device_type": 2 00:26:44.582 } 00:26:44.582 ], 00:26:44.582 "driver_specific": {} 00:26:44.582 } 00:26:44.582 ] 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:44.582 "name": "Existed_Raid", 00:26:44.582 "uuid": "6d1aa663-bac5-40c7-badf-a4d552a8c368", 00:26:44.582 "strip_size_kb": 64, 00:26:44.582 "state": "configuring", 00:26:44.582 "raid_level": "raid0", 00:26:44.582 "superblock": true, 00:26:44.582 "num_base_bdevs": 4, 00:26:44.582 "num_base_bdevs_discovered": 3, 00:26:44.582 "num_base_bdevs_operational": 4, 00:26:44.582 "base_bdevs_list": [ 00:26:44.582 { 00:26:44.582 "name": "BaseBdev1", 00:26:44.582 "uuid": "4b5f3182-48f8-4113-bb41-0ad343b5fe69", 00:26:44.582 "is_configured": true, 00:26:44.582 "data_offset": 2048, 00:26:44.582 "data_size": 63488 00:26:44.582 }, 00:26:44.582 { 00:26:44.582 "name": "BaseBdev2", 00:26:44.582 "uuid": 
"817ce672-782c-45ea-b5cb-1c79875cd40b", 00:26:44.582 "is_configured": true, 00:26:44.582 "data_offset": 2048, 00:26:44.582 "data_size": 63488 00:26:44.582 }, 00:26:44.582 { 00:26:44.582 "name": "BaseBdev3", 00:26:44.582 "uuid": "3b2fcad8-2796-47f9-b152-5b541d934d0c", 00:26:44.582 "is_configured": true, 00:26:44.582 "data_offset": 2048, 00:26:44.582 "data_size": 63488 00:26:44.582 }, 00:26:44.582 { 00:26:44.582 "name": "BaseBdev4", 00:26:44.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:44.582 "is_configured": false, 00:26:44.582 "data_offset": 0, 00:26:44.582 "data_size": 0 00:26:44.582 } 00:26:44.582 ] 00:26:44.582 }' 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:44.582 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.842 [2024-11-26 17:23:14.910305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:44.842 [2024-11-26 17:23:14.910789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:44.842 [2024-11-26 17:23:14.910913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:44.842 [2024-11-26 17:23:14.911265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:44.842 BaseBdev4 00:26:44.842 [2024-11-26 17:23:14.911453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:44.842 [2024-11-26 17:23:14.911469] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:26:44.842 [2024-11-26 17:23:14.911648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.842 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.842 [ 00:26:44.842 { 00:26:44.842 "name": "BaseBdev4", 00:26:44.842 "aliases": [ 00:26:44.842 "f1124859-0be7-4dbf-9883-244553b98fc6" 00:26:44.842 ], 00:26:44.842 "product_name": "Malloc disk", 00:26:44.842 "block_size": 512, 00:26:44.842 
"num_blocks": 65536, 00:26:44.842 "uuid": "f1124859-0be7-4dbf-9883-244553b98fc6", 00:26:44.842 "assigned_rate_limits": { 00:26:44.842 "rw_ios_per_sec": 0, 00:26:44.842 "rw_mbytes_per_sec": 0, 00:26:44.842 "r_mbytes_per_sec": 0, 00:26:44.842 "w_mbytes_per_sec": 0 00:26:44.842 }, 00:26:44.842 "claimed": true, 00:26:44.842 "claim_type": "exclusive_write", 00:26:44.842 "zoned": false, 00:26:44.842 "supported_io_types": { 00:26:44.842 "read": true, 00:26:44.842 "write": true, 00:26:44.842 "unmap": true, 00:26:44.842 "flush": true, 00:26:44.842 "reset": true, 00:26:44.842 "nvme_admin": false, 00:26:44.842 "nvme_io": false, 00:26:44.842 "nvme_io_md": false, 00:26:44.842 "write_zeroes": true, 00:26:44.842 "zcopy": true, 00:26:44.842 "get_zone_info": false, 00:26:44.842 "zone_management": false, 00:26:44.842 "zone_append": false, 00:26:45.126 "compare": false, 00:26:45.126 "compare_and_write": false, 00:26:45.126 "abort": true, 00:26:45.126 "seek_hole": false, 00:26:45.126 "seek_data": false, 00:26:45.126 "copy": true, 00:26:45.126 "nvme_iov_md": false 00:26:45.126 }, 00:26:45.126 "memory_domains": [ 00:26:45.126 { 00:26:45.126 "dma_device_id": "system", 00:26:45.126 "dma_device_type": 1 00:26:45.126 }, 00:26:45.126 { 00:26:45.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:45.126 "dma_device_type": 2 00:26:45.126 } 00:26:45.126 ], 00:26:45.126 "driver_specific": {} 00:26:45.126 } 00:26:45.126 ] 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.126 17:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.126 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:45.126 "name": "Existed_Raid", 00:26:45.126 "uuid": "6d1aa663-bac5-40c7-badf-a4d552a8c368", 00:26:45.126 "strip_size_kb": 64, 00:26:45.126 "state": "online", 00:26:45.126 "raid_level": "raid0", 00:26:45.126 "superblock": true, 00:26:45.126 "num_base_bdevs": 4, 
00:26:45.126 "num_base_bdevs_discovered": 4, 00:26:45.126 "num_base_bdevs_operational": 4, 00:26:45.126 "base_bdevs_list": [ 00:26:45.126 { 00:26:45.126 "name": "BaseBdev1", 00:26:45.126 "uuid": "4b5f3182-48f8-4113-bb41-0ad343b5fe69", 00:26:45.126 "is_configured": true, 00:26:45.126 "data_offset": 2048, 00:26:45.126 "data_size": 63488 00:26:45.126 }, 00:26:45.126 { 00:26:45.126 "name": "BaseBdev2", 00:26:45.126 "uuid": "817ce672-782c-45ea-b5cb-1c79875cd40b", 00:26:45.126 "is_configured": true, 00:26:45.126 "data_offset": 2048, 00:26:45.126 "data_size": 63488 00:26:45.126 }, 00:26:45.126 { 00:26:45.126 "name": "BaseBdev3", 00:26:45.126 "uuid": "3b2fcad8-2796-47f9-b152-5b541d934d0c", 00:26:45.126 "is_configured": true, 00:26:45.126 "data_offset": 2048, 00:26:45.126 "data_size": 63488 00:26:45.126 }, 00:26:45.126 { 00:26:45.126 "name": "BaseBdev4", 00:26:45.126 "uuid": "f1124859-0be7-4dbf-9883-244553b98fc6", 00:26:45.126 "is_configured": true, 00:26:45.126 "data_offset": 2048, 00:26:45.126 "data_size": 63488 00:26:45.126 } 00:26:45.126 ] 00:26:45.126 }' 00:26:45.126 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:45.126 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.396 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:45.396 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:45.396 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:45.396 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:45.396 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:45.396 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:45.396 
17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:45.396 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:45.396 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.396 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.396 [2024-11-26 17:23:15.410037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:45.397 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.397 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:45.397 "name": "Existed_Raid", 00:26:45.397 "aliases": [ 00:26:45.397 "6d1aa663-bac5-40c7-badf-a4d552a8c368" 00:26:45.397 ], 00:26:45.397 "product_name": "Raid Volume", 00:26:45.397 "block_size": 512, 00:26:45.397 "num_blocks": 253952, 00:26:45.397 "uuid": "6d1aa663-bac5-40c7-badf-a4d552a8c368", 00:26:45.397 "assigned_rate_limits": { 00:26:45.397 "rw_ios_per_sec": 0, 00:26:45.397 "rw_mbytes_per_sec": 0, 00:26:45.397 "r_mbytes_per_sec": 0, 00:26:45.397 "w_mbytes_per_sec": 0 00:26:45.397 }, 00:26:45.397 "claimed": false, 00:26:45.397 "zoned": false, 00:26:45.397 "supported_io_types": { 00:26:45.397 "read": true, 00:26:45.397 "write": true, 00:26:45.397 "unmap": true, 00:26:45.397 "flush": true, 00:26:45.397 "reset": true, 00:26:45.397 "nvme_admin": false, 00:26:45.397 "nvme_io": false, 00:26:45.397 "nvme_io_md": false, 00:26:45.397 "write_zeroes": true, 00:26:45.397 "zcopy": false, 00:26:45.397 "get_zone_info": false, 00:26:45.397 "zone_management": false, 00:26:45.397 "zone_append": false, 00:26:45.397 "compare": false, 00:26:45.397 "compare_and_write": false, 00:26:45.397 "abort": false, 00:26:45.397 "seek_hole": false, 00:26:45.397 "seek_data": false, 00:26:45.397 "copy": false, 00:26:45.397 
"nvme_iov_md": false 00:26:45.397 }, 00:26:45.397 "memory_domains": [ 00:26:45.397 { 00:26:45.397 "dma_device_id": "system", 00:26:45.397 "dma_device_type": 1 00:26:45.397 }, 00:26:45.397 { 00:26:45.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:45.397 "dma_device_type": 2 00:26:45.397 }, 00:26:45.397 { 00:26:45.397 "dma_device_id": "system", 00:26:45.397 "dma_device_type": 1 00:26:45.397 }, 00:26:45.397 { 00:26:45.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:45.397 "dma_device_type": 2 00:26:45.397 }, 00:26:45.397 { 00:26:45.397 "dma_device_id": "system", 00:26:45.397 "dma_device_type": 1 00:26:45.397 }, 00:26:45.397 { 00:26:45.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:45.397 "dma_device_type": 2 00:26:45.397 }, 00:26:45.397 { 00:26:45.397 "dma_device_id": "system", 00:26:45.397 "dma_device_type": 1 00:26:45.397 }, 00:26:45.397 { 00:26:45.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:45.397 "dma_device_type": 2 00:26:45.397 } 00:26:45.397 ], 00:26:45.397 "driver_specific": { 00:26:45.397 "raid": { 00:26:45.397 "uuid": "6d1aa663-bac5-40c7-badf-a4d552a8c368", 00:26:45.397 "strip_size_kb": 64, 00:26:45.397 "state": "online", 00:26:45.397 "raid_level": "raid0", 00:26:45.397 "superblock": true, 00:26:45.397 "num_base_bdevs": 4, 00:26:45.397 "num_base_bdevs_discovered": 4, 00:26:45.397 "num_base_bdevs_operational": 4, 00:26:45.397 "base_bdevs_list": [ 00:26:45.397 { 00:26:45.397 "name": "BaseBdev1", 00:26:45.397 "uuid": "4b5f3182-48f8-4113-bb41-0ad343b5fe69", 00:26:45.397 "is_configured": true, 00:26:45.397 "data_offset": 2048, 00:26:45.397 "data_size": 63488 00:26:45.397 }, 00:26:45.397 { 00:26:45.397 "name": "BaseBdev2", 00:26:45.397 "uuid": "817ce672-782c-45ea-b5cb-1c79875cd40b", 00:26:45.397 "is_configured": true, 00:26:45.397 "data_offset": 2048, 00:26:45.397 "data_size": 63488 00:26:45.397 }, 00:26:45.397 { 00:26:45.397 "name": "BaseBdev3", 00:26:45.397 "uuid": "3b2fcad8-2796-47f9-b152-5b541d934d0c", 00:26:45.397 "is_configured": true, 
00:26:45.397 "data_offset": 2048, 00:26:45.397 "data_size": 63488 00:26:45.397 }, 00:26:45.397 { 00:26:45.397 "name": "BaseBdev4", 00:26:45.397 "uuid": "f1124859-0be7-4dbf-9883-244553b98fc6", 00:26:45.397 "is_configured": true, 00:26:45.397 "data_offset": 2048, 00:26:45.397 "data_size": 63488 00:26:45.397 } 00:26:45.397 ] 00:26:45.397 } 00:26:45.397 } 00:26:45.397 }' 00:26:45.397 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:45.397 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:45.397 BaseBdev2 00:26:45.397 BaseBdev3 00:26:45.397 BaseBdev4' 00:26:45.397 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:45.655 17:23:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:45.655 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.656 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.656 [2024-11-26 17:23:15.737782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:45.656 [2024-11-26 17:23:15.737930] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:45.656 [2024-11-26 17:23:15.738104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:45.915 "name": "Existed_Raid", 00:26:45.915 "uuid": "6d1aa663-bac5-40c7-badf-a4d552a8c368", 00:26:45.915 "strip_size_kb": 64, 00:26:45.915 "state": "offline", 00:26:45.915 "raid_level": "raid0", 00:26:45.915 "superblock": true, 00:26:45.915 "num_base_bdevs": 4, 00:26:45.915 "num_base_bdevs_discovered": 3, 00:26:45.915 "num_base_bdevs_operational": 3, 00:26:45.915 "base_bdevs_list": [ 00:26:45.915 { 00:26:45.915 "name": null, 00:26:45.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:45.915 "is_configured": false, 00:26:45.915 "data_offset": 0, 00:26:45.915 "data_size": 63488 00:26:45.915 }, 00:26:45.915 { 00:26:45.915 "name": "BaseBdev2", 00:26:45.915 "uuid": "817ce672-782c-45ea-b5cb-1c79875cd40b", 00:26:45.915 "is_configured": true, 00:26:45.915 "data_offset": 2048, 00:26:45.915 "data_size": 63488 00:26:45.915 }, 00:26:45.915 { 00:26:45.915 "name": "BaseBdev3", 00:26:45.915 "uuid": "3b2fcad8-2796-47f9-b152-5b541d934d0c", 00:26:45.915 "is_configured": true, 00:26:45.915 "data_offset": 2048, 00:26:45.915 "data_size": 63488 00:26:45.915 }, 00:26:45.915 { 00:26:45.915 "name": "BaseBdev4", 00:26:45.915 "uuid": "f1124859-0be7-4dbf-9883-244553b98fc6", 00:26:45.915 "is_configured": true, 00:26:45.915 "data_offset": 2048, 00:26:45.915 "data_size": 63488 00:26:45.915 } 00:26:45.915 ] 00:26:45.915 }' 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:45.915 17:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.175 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:46.175 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:46.175 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:46.175 
17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.175 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.175 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:46.175 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.435 [2024-11-26 17:23:16.311972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.435 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.435 [2024-11-26 17:23:16.464226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:26:46.694 17:23:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.694 [2024-11-26 17:23:16.616452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:46.694 [2024-11-26 17:23:16.616643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.694 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.954 BaseBdev2 00:26:46.954 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.954 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:46.954 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:46.954 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:46.954 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:46.954 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:46.954 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:46.954 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:46.954 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.954 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.954 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.954 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:46.954 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.954 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.954 [ 00:26:46.954 { 00:26:46.954 "name": "BaseBdev2", 00:26:46.954 "aliases": [ 00:26:46.954 
"d1159a80-6335-477d-bd17-4abe503c0136" 00:26:46.954 ], 00:26:46.954 "product_name": "Malloc disk", 00:26:46.954 "block_size": 512, 00:26:46.954 "num_blocks": 65536, 00:26:46.954 "uuid": "d1159a80-6335-477d-bd17-4abe503c0136", 00:26:46.954 "assigned_rate_limits": { 00:26:46.954 "rw_ios_per_sec": 0, 00:26:46.954 "rw_mbytes_per_sec": 0, 00:26:46.954 "r_mbytes_per_sec": 0, 00:26:46.954 "w_mbytes_per_sec": 0 00:26:46.954 }, 00:26:46.954 "claimed": false, 00:26:46.954 "zoned": false, 00:26:46.954 "supported_io_types": { 00:26:46.954 "read": true, 00:26:46.954 "write": true, 00:26:46.954 "unmap": true, 00:26:46.954 "flush": true, 00:26:46.954 "reset": true, 00:26:46.954 "nvme_admin": false, 00:26:46.954 "nvme_io": false, 00:26:46.954 "nvme_io_md": false, 00:26:46.954 "write_zeroes": true, 00:26:46.954 "zcopy": true, 00:26:46.954 "get_zone_info": false, 00:26:46.954 "zone_management": false, 00:26:46.954 "zone_append": false, 00:26:46.954 "compare": false, 00:26:46.954 "compare_and_write": false, 00:26:46.955 "abort": true, 00:26:46.955 "seek_hole": false, 00:26:46.955 "seek_data": false, 00:26:46.955 "copy": true, 00:26:46.955 "nvme_iov_md": false 00:26:46.955 }, 00:26:46.955 "memory_domains": [ 00:26:46.955 { 00:26:46.955 "dma_device_id": "system", 00:26:46.955 "dma_device_type": 1 00:26:46.955 }, 00:26:46.955 { 00:26:46.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:46.955 "dma_device_type": 2 00:26:46.955 } 00:26:46.955 ], 00:26:46.955 "driver_specific": {} 00:26:46.955 } 00:26:46.955 ] 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:46.955 17:23:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.955 BaseBdev3 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.955 [ 00:26:46.955 { 
00:26:46.955 "name": "BaseBdev3", 00:26:46.955 "aliases": [ 00:26:46.955 "8e00e3eb-02ce-4cdc-acbd-fe90129ddaec" 00:26:46.955 ], 00:26:46.955 "product_name": "Malloc disk", 00:26:46.955 "block_size": 512, 00:26:46.955 "num_blocks": 65536, 00:26:46.955 "uuid": "8e00e3eb-02ce-4cdc-acbd-fe90129ddaec", 00:26:46.955 "assigned_rate_limits": { 00:26:46.955 "rw_ios_per_sec": 0, 00:26:46.955 "rw_mbytes_per_sec": 0, 00:26:46.955 "r_mbytes_per_sec": 0, 00:26:46.955 "w_mbytes_per_sec": 0 00:26:46.955 }, 00:26:46.955 "claimed": false, 00:26:46.955 "zoned": false, 00:26:46.955 "supported_io_types": { 00:26:46.955 "read": true, 00:26:46.955 "write": true, 00:26:46.955 "unmap": true, 00:26:46.955 "flush": true, 00:26:46.955 "reset": true, 00:26:46.955 "nvme_admin": false, 00:26:46.955 "nvme_io": false, 00:26:46.955 "nvme_io_md": false, 00:26:46.955 "write_zeroes": true, 00:26:46.955 "zcopy": true, 00:26:46.955 "get_zone_info": false, 00:26:46.955 "zone_management": false, 00:26:46.955 "zone_append": false, 00:26:46.955 "compare": false, 00:26:46.955 "compare_and_write": false, 00:26:46.955 "abort": true, 00:26:46.955 "seek_hole": false, 00:26:46.955 "seek_data": false, 00:26:46.955 "copy": true, 00:26:46.955 "nvme_iov_md": false 00:26:46.955 }, 00:26:46.955 "memory_domains": [ 00:26:46.955 { 00:26:46.955 "dma_device_id": "system", 00:26:46.955 "dma_device_type": 1 00:26:46.955 }, 00:26:46.955 { 00:26:46.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:46.955 "dma_device_type": 2 00:26:46.955 } 00:26:46.955 ], 00:26:46.955 "driver_specific": {} 00:26:46.955 } 00:26:46.955 ] 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.955 17:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.955 BaseBdev4 00:26:46.955 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.955 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:26:46.955 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:26:46.955 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:46.955 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:46.955 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:46.955 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:46.955 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:46.955 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.955 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.955 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.955 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:46.955 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.955 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:26:46.955 [ 00:26:46.955 { 00:26:46.955 "name": "BaseBdev4", 00:26:46.955 "aliases": [ 00:26:46.955 "ae27d4a5-bd98-45a7-9b78-be94accc70a8" 00:26:46.955 ], 00:26:46.955 "product_name": "Malloc disk", 00:26:46.955 "block_size": 512, 00:26:46.955 "num_blocks": 65536, 00:26:46.955 "uuid": "ae27d4a5-bd98-45a7-9b78-be94accc70a8", 00:26:46.955 "assigned_rate_limits": { 00:26:46.955 "rw_ios_per_sec": 0, 00:26:46.955 "rw_mbytes_per_sec": 0, 00:26:46.955 "r_mbytes_per_sec": 0, 00:26:46.955 "w_mbytes_per_sec": 0 00:26:46.955 }, 00:26:46.955 "claimed": false, 00:26:46.955 "zoned": false, 00:26:46.955 "supported_io_types": { 00:26:46.955 "read": true, 00:26:46.955 "write": true, 00:26:46.955 "unmap": true, 00:26:46.955 "flush": true, 00:26:46.955 "reset": true, 00:26:46.955 "nvme_admin": false, 00:26:46.955 "nvme_io": false, 00:26:46.955 "nvme_io_md": false, 00:26:46.955 "write_zeroes": true, 00:26:46.955 "zcopy": true, 00:26:46.955 "get_zone_info": false, 00:26:46.955 "zone_management": false, 00:26:46.955 "zone_append": false, 00:26:46.955 "compare": false, 00:26:46.955 "compare_and_write": false, 00:26:46.955 "abort": true, 00:26:46.955 "seek_hole": false, 00:26:46.956 "seek_data": false, 00:26:46.956 "copy": true, 00:26:46.956 "nvme_iov_md": false 00:26:46.956 }, 00:26:46.956 "memory_domains": [ 00:26:46.956 { 00:26:46.956 "dma_device_id": "system", 00:26:46.956 "dma_device_type": 1 00:26:46.956 }, 00:26:46.956 { 00:26:46.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:46.956 "dma_device_type": 2 00:26:46.956 } 00:26:46.956 ], 00:26:46.956 "driver_specific": {} 00:26:46.956 } 00:26:46.956 ] 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:46.956 17:23:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.956 [2024-11-26 17:23:17.057159] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:46.956 [2024-11-26 17:23:17.057858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:46.956 [2024-11-26 17:23:17.057916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:46.956 [2024-11-26 17:23:17.060383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:46.956 [2024-11-26 17:23:17.060439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:46.956 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:47.216 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:47.216 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:47.216 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.216 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.216 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.216 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:47.216 "name": "Existed_Raid", 00:26:47.216 "uuid": "3925fc19-97b6-4030-9dd0-8c62a0bf8114", 00:26:47.216 "strip_size_kb": 64, 00:26:47.216 "state": "configuring", 00:26:47.216 "raid_level": "raid0", 00:26:47.216 "superblock": true, 00:26:47.216 "num_base_bdevs": 4, 00:26:47.216 "num_base_bdevs_discovered": 3, 00:26:47.216 "num_base_bdevs_operational": 4, 00:26:47.216 "base_bdevs_list": [ 00:26:47.216 { 00:26:47.216 "name": "BaseBdev1", 00:26:47.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.216 "is_configured": false, 00:26:47.216 "data_offset": 0, 00:26:47.216 "data_size": 0 00:26:47.216 }, 00:26:47.216 { 00:26:47.216 "name": "BaseBdev2", 00:26:47.216 "uuid": "d1159a80-6335-477d-bd17-4abe503c0136", 00:26:47.216 "is_configured": true, 00:26:47.216 "data_offset": 2048, 00:26:47.216 "data_size": 63488 
00:26:47.216 }, 00:26:47.216 { 00:26:47.216 "name": "BaseBdev3", 00:26:47.216 "uuid": "8e00e3eb-02ce-4cdc-acbd-fe90129ddaec", 00:26:47.216 "is_configured": true, 00:26:47.216 "data_offset": 2048, 00:26:47.216 "data_size": 63488 00:26:47.216 }, 00:26:47.216 { 00:26:47.216 "name": "BaseBdev4", 00:26:47.216 "uuid": "ae27d4a5-bd98-45a7-9b78-be94accc70a8", 00:26:47.216 "is_configured": true, 00:26:47.216 "data_offset": 2048, 00:26:47.216 "data_size": 63488 00:26:47.216 } 00:26:47.216 ] 00:26:47.216 }' 00:26:47.216 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:47.216 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.475 [2024-11-26 17:23:17.508711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.475 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:47.475 "name": "Existed_Raid", 00:26:47.475 "uuid": "3925fc19-97b6-4030-9dd0-8c62a0bf8114", 00:26:47.475 "strip_size_kb": 64, 00:26:47.475 "state": "configuring", 00:26:47.475 "raid_level": "raid0", 00:26:47.475 "superblock": true, 00:26:47.475 "num_base_bdevs": 4, 00:26:47.475 "num_base_bdevs_discovered": 2, 00:26:47.475 "num_base_bdevs_operational": 4, 00:26:47.475 "base_bdevs_list": [ 00:26:47.475 { 00:26:47.475 "name": "BaseBdev1", 00:26:47.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.475 "is_configured": false, 00:26:47.475 "data_offset": 0, 00:26:47.475 "data_size": 0 00:26:47.476 }, 00:26:47.476 { 00:26:47.476 "name": null, 00:26:47.476 "uuid": "d1159a80-6335-477d-bd17-4abe503c0136", 00:26:47.476 "is_configured": false, 00:26:47.476 "data_offset": 0, 00:26:47.476 "data_size": 63488 
00:26:47.476 }, 00:26:47.476 { 00:26:47.476 "name": "BaseBdev3", 00:26:47.476 "uuid": "8e00e3eb-02ce-4cdc-acbd-fe90129ddaec", 00:26:47.476 "is_configured": true, 00:26:47.476 "data_offset": 2048, 00:26:47.476 "data_size": 63488 00:26:47.476 }, 00:26:47.476 { 00:26:47.476 "name": "BaseBdev4", 00:26:47.476 "uuid": "ae27d4a5-bd98-45a7-9b78-be94accc70a8", 00:26:47.476 "is_configured": true, 00:26:47.476 "data_offset": 2048, 00:26:47.476 "data_size": 63488 00:26:47.476 } 00:26:47.476 ] 00:26:47.476 }' 00:26:47.476 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:47.476 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.045 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:48.045 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:48.045 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.045 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.045 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.045 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:48.045 17:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:48.045 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.045 17:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.045 [2024-11-26 17:23:18.028256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:48.045 BaseBdev1 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.045 [ 00:26:48.045 { 00:26:48.045 "name": "BaseBdev1", 00:26:48.045 "aliases": [ 00:26:48.045 "72b07fda-26c1-48c3-aaaa-99ceef98a23e" 00:26:48.045 ], 00:26:48.045 "product_name": "Malloc disk", 00:26:48.045 "block_size": 512, 00:26:48.045 "num_blocks": 65536, 00:26:48.045 "uuid": "72b07fda-26c1-48c3-aaaa-99ceef98a23e", 00:26:48.045 "assigned_rate_limits": { 00:26:48.045 "rw_ios_per_sec": 0, 00:26:48.045 "rw_mbytes_per_sec": 0, 
00:26:48.045 "r_mbytes_per_sec": 0, 00:26:48.045 "w_mbytes_per_sec": 0 00:26:48.045 }, 00:26:48.045 "claimed": true, 00:26:48.045 "claim_type": "exclusive_write", 00:26:48.045 "zoned": false, 00:26:48.045 "supported_io_types": { 00:26:48.045 "read": true, 00:26:48.045 "write": true, 00:26:48.045 "unmap": true, 00:26:48.045 "flush": true, 00:26:48.045 "reset": true, 00:26:48.045 "nvme_admin": false, 00:26:48.045 "nvme_io": false, 00:26:48.045 "nvme_io_md": false, 00:26:48.045 "write_zeroes": true, 00:26:48.045 "zcopy": true, 00:26:48.045 "get_zone_info": false, 00:26:48.045 "zone_management": false, 00:26:48.045 "zone_append": false, 00:26:48.045 "compare": false, 00:26:48.045 "compare_and_write": false, 00:26:48.045 "abort": true, 00:26:48.045 "seek_hole": false, 00:26:48.045 "seek_data": false, 00:26:48.045 "copy": true, 00:26:48.045 "nvme_iov_md": false 00:26:48.045 }, 00:26:48.045 "memory_domains": [ 00:26:48.045 { 00:26:48.045 "dma_device_id": "system", 00:26:48.045 "dma_device_type": 1 00:26:48.045 }, 00:26:48.045 { 00:26:48.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:48.045 "dma_device_type": 2 00:26:48.045 } 00:26:48.045 ], 00:26:48.045 "driver_specific": {} 00:26:48.045 } 00:26:48.045 ] 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:48.045 17:23:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.045 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:48.045 "name": "Existed_Raid", 00:26:48.045 "uuid": "3925fc19-97b6-4030-9dd0-8c62a0bf8114", 00:26:48.045 "strip_size_kb": 64, 00:26:48.045 "state": "configuring", 00:26:48.045 "raid_level": "raid0", 00:26:48.045 "superblock": true, 00:26:48.045 "num_base_bdevs": 4, 00:26:48.045 "num_base_bdevs_discovered": 3, 00:26:48.045 "num_base_bdevs_operational": 4, 00:26:48.045 "base_bdevs_list": [ 00:26:48.045 { 00:26:48.045 "name": "BaseBdev1", 00:26:48.045 "uuid": "72b07fda-26c1-48c3-aaaa-99ceef98a23e", 00:26:48.045 "is_configured": true, 00:26:48.045 "data_offset": 2048, 00:26:48.045 "data_size": 63488 00:26:48.045 }, 00:26:48.045 { 
00:26:48.045 "name": null, 00:26:48.045 "uuid": "d1159a80-6335-477d-bd17-4abe503c0136", 00:26:48.045 "is_configured": false, 00:26:48.045 "data_offset": 0, 00:26:48.045 "data_size": 63488 00:26:48.045 }, 00:26:48.045 { 00:26:48.045 "name": "BaseBdev3", 00:26:48.045 "uuid": "8e00e3eb-02ce-4cdc-acbd-fe90129ddaec", 00:26:48.045 "is_configured": true, 00:26:48.045 "data_offset": 2048, 00:26:48.045 "data_size": 63488 00:26:48.045 }, 00:26:48.045 { 00:26:48.045 "name": "BaseBdev4", 00:26:48.045 "uuid": "ae27d4a5-bd98-45a7-9b78-be94accc70a8", 00:26:48.045 "is_configured": true, 00:26:48.045 "data_offset": 2048, 00:26:48.045 "data_size": 63488 00:26:48.045 } 00:26:48.045 ] 00:26:48.045 }' 00:26:48.046 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:48.046 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.613 [2024-11-26 17:23:18.619564] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.613 17:23:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:48.613 "name": "Existed_Raid", 00:26:48.613 "uuid": "3925fc19-97b6-4030-9dd0-8c62a0bf8114", 00:26:48.613 "strip_size_kb": 64, 00:26:48.613 "state": "configuring", 00:26:48.613 "raid_level": "raid0", 00:26:48.613 "superblock": true, 00:26:48.613 "num_base_bdevs": 4, 00:26:48.613 "num_base_bdevs_discovered": 2, 00:26:48.613 "num_base_bdevs_operational": 4, 00:26:48.613 "base_bdevs_list": [ 00:26:48.613 { 00:26:48.613 "name": "BaseBdev1", 00:26:48.613 "uuid": "72b07fda-26c1-48c3-aaaa-99ceef98a23e", 00:26:48.613 "is_configured": true, 00:26:48.613 "data_offset": 2048, 00:26:48.613 "data_size": 63488 00:26:48.613 }, 00:26:48.613 { 00:26:48.613 "name": null, 00:26:48.613 "uuid": "d1159a80-6335-477d-bd17-4abe503c0136", 00:26:48.613 "is_configured": false, 00:26:48.613 "data_offset": 0, 00:26:48.613 "data_size": 63488 00:26:48.613 }, 00:26:48.613 { 00:26:48.613 "name": null, 00:26:48.613 "uuid": "8e00e3eb-02ce-4cdc-acbd-fe90129ddaec", 00:26:48.613 "is_configured": false, 00:26:48.613 "data_offset": 0, 00:26:48.613 "data_size": 63488 00:26:48.613 }, 00:26:48.613 { 00:26:48.613 "name": "BaseBdev4", 00:26:48.613 "uuid": "ae27d4a5-bd98-45a7-9b78-be94accc70a8", 00:26:48.613 "is_configured": true, 00:26:48.613 "data_offset": 2048, 00:26:48.613 "data_size": 63488 00:26:48.613 } 00:26:48.613 ] 00:26:48.613 }' 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:48.613 17:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.182 
17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.182 [2024-11-26 17:23:19.130808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:49.182 "name": "Existed_Raid", 00:26:49.182 "uuid": "3925fc19-97b6-4030-9dd0-8c62a0bf8114", 00:26:49.182 "strip_size_kb": 64, 00:26:49.182 "state": "configuring", 00:26:49.182 "raid_level": "raid0", 00:26:49.182 "superblock": true, 00:26:49.182 "num_base_bdevs": 4, 00:26:49.182 "num_base_bdevs_discovered": 3, 00:26:49.182 "num_base_bdevs_operational": 4, 00:26:49.182 "base_bdevs_list": [ 00:26:49.182 { 00:26:49.182 "name": "BaseBdev1", 00:26:49.182 "uuid": "72b07fda-26c1-48c3-aaaa-99ceef98a23e", 00:26:49.182 "is_configured": true, 00:26:49.182 "data_offset": 2048, 00:26:49.182 "data_size": 63488 00:26:49.182 }, 00:26:49.182 { 00:26:49.182 "name": null, 00:26:49.182 "uuid": "d1159a80-6335-477d-bd17-4abe503c0136", 00:26:49.182 "is_configured": false, 00:26:49.182 "data_offset": 0, 00:26:49.182 "data_size": 63488 00:26:49.182 }, 00:26:49.182 { 00:26:49.182 "name": "BaseBdev3", 00:26:49.182 "uuid": "8e00e3eb-02ce-4cdc-acbd-fe90129ddaec", 00:26:49.182 "is_configured": true, 00:26:49.182 "data_offset": 2048, 00:26:49.182 "data_size": 63488 00:26:49.182 }, 00:26:49.182 { 00:26:49.182 "name": "BaseBdev4", 00:26:49.182 "uuid": 
"ae27d4a5-bd98-45a7-9b78-be94accc70a8", 00:26:49.182 "is_configured": true, 00:26:49.182 "data_offset": 2048, 00:26:49.182 "data_size": 63488 00:26:49.182 } 00:26:49.182 ] 00:26:49.182 }' 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:49.182 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.442 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.442 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.442 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.442 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.701 [2024-11-26 17:23:19.586220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:49.701 "name": "Existed_Raid", 00:26:49.701 "uuid": "3925fc19-97b6-4030-9dd0-8c62a0bf8114", 00:26:49.701 "strip_size_kb": 64, 00:26:49.701 "state": "configuring", 00:26:49.701 "raid_level": "raid0", 00:26:49.701 "superblock": true, 00:26:49.701 "num_base_bdevs": 4, 00:26:49.701 "num_base_bdevs_discovered": 2, 00:26:49.701 "num_base_bdevs_operational": 4, 00:26:49.701 "base_bdevs_list": [ 00:26:49.701 { 00:26:49.701 "name": null, 00:26:49.701 
"uuid": "72b07fda-26c1-48c3-aaaa-99ceef98a23e", 00:26:49.701 "is_configured": false, 00:26:49.701 "data_offset": 0, 00:26:49.701 "data_size": 63488 00:26:49.701 }, 00:26:49.701 { 00:26:49.701 "name": null, 00:26:49.701 "uuid": "d1159a80-6335-477d-bd17-4abe503c0136", 00:26:49.701 "is_configured": false, 00:26:49.701 "data_offset": 0, 00:26:49.701 "data_size": 63488 00:26:49.701 }, 00:26:49.701 { 00:26:49.701 "name": "BaseBdev3", 00:26:49.701 "uuid": "8e00e3eb-02ce-4cdc-acbd-fe90129ddaec", 00:26:49.701 "is_configured": true, 00:26:49.701 "data_offset": 2048, 00:26:49.701 "data_size": 63488 00:26:49.701 }, 00:26:49.701 { 00:26:49.701 "name": "BaseBdev4", 00:26:49.701 "uuid": "ae27d4a5-bd98-45a7-9b78-be94accc70a8", 00:26:49.701 "is_configured": true, 00:26:49.701 "data_offset": 2048, 00:26:49.701 "data_size": 63488 00:26:49.701 } 00:26:49.701 ] 00:26:49.701 }' 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:49.701 17:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.268 [2024-11-26 17:23:20.180745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:50.268 "name": "Existed_Raid", 00:26:50.268 "uuid": "3925fc19-97b6-4030-9dd0-8c62a0bf8114", 00:26:50.268 "strip_size_kb": 64, 00:26:50.268 "state": "configuring", 00:26:50.268 "raid_level": "raid0", 00:26:50.268 "superblock": true, 00:26:50.268 "num_base_bdevs": 4, 00:26:50.268 "num_base_bdevs_discovered": 3, 00:26:50.268 "num_base_bdevs_operational": 4, 00:26:50.268 "base_bdevs_list": [ 00:26:50.268 { 00:26:50.268 "name": null, 00:26:50.268 "uuid": "72b07fda-26c1-48c3-aaaa-99ceef98a23e", 00:26:50.268 "is_configured": false, 00:26:50.268 "data_offset": 0, 00:26:50.268 "data_size": 63488 00:26:50.268 }, 00:26:50.268 { 00:26:50.268 "name": "BaseBdev2", 00:26:50.268 "uuid": "d1159a80-6335-477d-bd17-4abe503c0136", 00:26:50.268 "is_configured": true, 00:26:50.268 "data_offset": 2048, 00:26:50.268 "data_size": 63488 00:26:50.268 }, 00:26:50.268 { 00:26:50.268 "name": "BaseBdev3", 00:26:50.268 "uuid": "8e00e3eb-02ce-4cdc-acbd-fe90129ddaec", 00:26:50.268 "is_configured": true, 00:26:50.268 "data_offset": 2048, 00:26:50.268 "data_size": 63488 00:26:50.268 }, 00:26:50.268 { 00:26:50.268 "name": "BaseBdev4", 00:26:50.268 "uuid": "ae27d4a5-bd98-45a7-9b78-be94accc70a8", 00:26:50.268 "is_configured": true, 00:26:50.268 "data_offset": 2048, 00:26:50.268 "data_size": 63488 00:26:50.268 } 00:26:50.268 ] 00:26:50.268 }' 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:50.268 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.527 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:50.527 17:23:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.527 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.527 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.527 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.527 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 72b07fda-26c1-48c3-aaaa-99ceef98a23e 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.785 [2024-11-26 17:23:20.732419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:50.785 [2024-11-26 17:23:20.732737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:50.785 [2024-11-26 17:23:20.732755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:50.785 [2024-11-26 17:23:20.733064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:26:50.785 [2024-11-26 17:23:20.733201] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:50.785 [2024-11-26 17:23:20.733214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:50.785 NewBaseBdev 00:26:50.785 [2024-11-26 17:23:20.733353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.785 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.786 17:23:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.786 [ 00:26:50.786 { 00:26:50.786 "name": "NewBaseBdev", 00:26:50.786 "aliases": [ 00:26:50.786 "72b07fda-26c1-48c3-aaaa-99ceef98a23e" 00:26:50.786 ], 00:26:50.786 "product_name": "Malloc disk", 00:26:50.786 "block_size": 512, 00:26:50.786 "num_blocks": 65536, 00:26:50.786 "uuid": "72b07fda-26c1-48c3-aaaa-99ceef98a23e", 00:26:50.786 "assigned_rate_limits": { 00:26:50.786 "rw_ios_per_sec": 0, 00:26:50.786 "rw_mbytes_per_sec": 0, 00:26:50.786 "r_mbytes_per_sec": 0, 00:26:50.786 "w_mbytes_per_sec": 0 00:26:50.786 }, 00:26:50.786 "claimed": true, 00:26:50.786 "claim_type": "exclusive_write", 00:26:50.786 "zoned": false, 00:26:50.786 "supported_io_types": { 00:26:50.786 "read": true, 00:26:50.786 "write": true, 00:26:50.786 "unmap": true, 00:26:50.786 "flush": true, 00:26:50.786 "reset": true, 00:26:50.786 "nvme_admin": false, 00:26:50.786 "nvme_io": false, 00:26:50.786 "nvme_io_md": false, 00:26:50.786 "write_zeroes": true, 00:26:50.786 "zcopy": true, 00:26:50.786 "get_zone_info": false, 00:26:50.786 "zone_management": false, 00:26:50.786 "zone_append": false, 00:26:50.786 "compare": false, 00:26:50.786 "compare_and_write": false, 00:26:50.786 "abort": true, 00:26:50.786 "seek_hole": false, 00:26:50.786 "seek_data": false, 00:26:50.786 "copy": true, 00:26:50.786 "nvme_iov_md": false 00:26:50.786 }, 00:26:50.786 "memory_domains": [ 00:26:50.786 { 00:26:50.786 "dma_device_id": "system", 00:26:50.786 "dma_device_type": 1 00:26:50.786 }, 00:26:50.786 { 00:26:50.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:50.786 "dma_device_type": 2 00:26:50.786 } 00:26:50.786 ], 00:26:50.786 "driver_specific": {} 00:26:50.786 } 00:26:50.786 ] 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:50.786 17:23:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:50.786 "name": "Existed_Raid", 00:26:50.786 "uuid": "3925fc19-97b6-4030-9dd0-8c62a0bf8114", 00:26:50.786 "strip_size_kb": 64, 00:26:50.786 
"state": "online", 00:26:50.786 "raid_level": "raid0", 00:26:50.786 "superblock": true, 00:26:50.786 "num_base_bdevs": 4, 00:26:50.786 "num_base_bdevs_discovered": 4, 00:26:50.786 "num_base_bdevs_operational": 4, 00:26:50.786 "base_bdevs_list": [ 00:26:50.786 { 00:26:50.786 "name": "NewBaseBdev", 00:26:50.786 "uuid": "72b07fda-26c1-48c3-aaaa-99ceef98a23e", 00:26:50.786 "is_configured": true, 00:26:50.786 "data_offset": 2048, 00:26:50.786 "data_size": 63488 00:26:50.786 }, 00:26:50.786 { 00:26:50.786 "name": "BaseBdev2", 00:26:50.786 "uuid": "d1159a80-6335-477d-bd17-4abe503c0136", 00:26:50.786 "is_configured": true, 00:26:50.786 "data_offset": 2048, 00:26:50.786 "data_size": 63488 00:26:50.786 }, 00:26:50.786 { 00:26:50.786 "name": "BaseBdev3", 00:26:50.786 "uuid": "8e00e3eb-02ce-4cdc-acbd-fe90129ddaec", 00:26:50.786 "is_configured": true, 00:26:50.786 "data_offset": 2048, 00:26:50.786 "data_size": 63488 00:26:50.786 }, 00:26:50.786 { 00:26:50.786 "name": "BaseBdev4", 00:26:50.786 "uuid": "ae27d4a5-bd98-45a7-9b78-be94accc70a8", 00:26:50.786 "is_configured": true, 00:26:50.786 "data_offset": 2048, 00:26:50.786 "data_size": 63488 00:26:50.786 } 00:26:50.786 ] 00:26:50.786 }' 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:50.786 17:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.352 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:51.352 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:51.352 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:51.352 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:51.352 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:51.352 
17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:51.352 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:51.352 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:51.352 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.352 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.352 [2024-11-26 17:23:21.260183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:51.352 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.352 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:51.352 "name": "Existed_Raid", 00:26:51.352 "aliases": [ 00:26:51.352 "3925fc19-97b6-4030-9dd0-8c62a0bf8114" 00:26:51.352 ], 00:26:51.352 "product_name": "Raid Volume", 00:26:51.352 "block_size": 512, 00:26:51.352 "num_blocks": 253952, 00:26:51.352 "uuid": "3925fc19-97b6-4030-9dd0-8c62a0bf8114", 00:26:51.352 "assigned_rate_limits": { 00:26:51.352 "rw_ios_per_sec": 0, 00:26:51.352 "rw_mbytes_per_sec": 0, 00:26:51.352 "r_mbytes_per_sec": 0, 00:26:51.352 "w_mbytes_per_sec": 0 00:26:51.352 }, 00:26:51.352 "claimed": false, 00:26:51.352 "zoned": false, 00:26:51.352 "supported_io_types": { 00:26:51.352 "read": true, 00:26:51.352 "write": true, 00:26:51.352 "unmap": true, 00:26:51.352 "flush": true, 00:26:51.352 "reset": true, 00:26:51.352 "nvme_admin": false, 00:26:51.352 "nvme_io": false, 00:26:51.352 "nvme_io_md": false, 00:26:51.352 "write_zeroes": true, 00:26:51.352 "zcopy": false, 00:26:51.352 "get_zone_info": false, 00:26:51.352 "zone_management": false, 00:26:51.352 "zone_append": false, 00:26:51.352 "compare": false, 00:26:51.352 "compare_and_write": false, 00:26:51.352 "abort": 
false, 00:26:51.352 "seek_hole": false, 00:26:51.352 "seek_data": false, 00:26:51.352 "copy": false, 00:26:51.352 "nvme_iov_md": false 00:26:51.352 }, 00:26:51.352 "memory_domains": [ 00:26:51.352 { 00:26:51.352 "dma_device_id": "system", 00:26:51.352 "dma_device_type": 1 00:26:51.352 }, 00:26:51.352 { 00:26:51.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.352 "dma_device_type": 2 00:26:51.352 }, 00:26:51.352 { 00:26:51.352 "dma_device_id": "system", 00:26:51.352 "dma_device_type": 1 00:26:51.352 }, 00:26:51.352 { 00:26:51.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.352 "dma_device_type": 2 00:26:51.352 }, 00:26:51.352 { 00:26:51.352 "dma_device_id": "system", 00:26:51.352 "dma_device_type": 1 00:26:51.352 }, 00:26:51.352 { 00:26:51.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.352 "dma_device_type": 2 00:26:51.352 }, 00:26:51.352 { 00:26:51.352 "dma_device_id": "system", 00:26:51.352 "dma_device_type": 1 00:26:51.352 }, 00:26:51.352 { 00:26:51.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.352 "dma_device_type": 2 00:26:51.352 } 00:26:51.352 ], 00:26:51.352 "driver_specific": { 00:26:51.352 "raid": { 00:26:51.352 "uuid": "3925fc19-97b6-4030-9dd0-8c62a0bf8114", 00:26:51.352 "strip_size_kb": 64, 00:26:51.352 "state": "online", 00:26:51.352 "raid_level": "raid0", 00:26:51.352 "superblock": true, 00:26:51.352 "num_base_bdevs": 4, 00:26:51.352 "num_base_bdevs_discovered": 4, 00:26:51.352 "num_base_bdevs_operational": 4, 00:26:51.352 "base_bdevs_list": [ 00:26:51.352 { 00:26:51.352 "name": "NewBaseBdev", 00:26:51.352 "uuid": "72b07fda-26c1-48c3-aaaa-99ceef98a23e", 00:26:51.352 "is_configured": true, 00:26:51.352 "data_offset": 2048, 00:26:51.352 "data_size": 63488 00:26:51.352 }, 00:26:51.352 { 00:26:51.352 "name": "BaseBdev2", 00:26:51.352 "uuid": "d1159a80-6335-477d-bd17-4abe503c0136", 00:26:51.352 "is_configured": true, 00:26:51.352 "data_offset": 2048, 00:26:51.352 "data_size": 63488 00:26:51.352 }, 00:26:51.352 { 00:26:51.352 
"name": "BaseBdev3", 00:26:51.352 "uuid": "8e00e3eb-02ce-4cdc-acbd-fe90129ddaec", 00:26:51.352 "is_configured": true, 00:26:51.352 "data_offset": 2048, 00:26:51.352 "data_size": 63488 00:26:51.352 }, 00:26:51.352 { 00:26:51.352 "name": "BaseBdev4", 00:26:51.352 "uuid": "ae27d4a5-bd98-45a7-9b78-be94accc70a8", 00:26:51.352 "is_configured": true, 00:26:51.352 "data_offset": 2048, 00:26:51.352 "data_size": 63488 00:26:51.352 } 00:26:51.352 ] 00:26:51.352 } 00:26:51.352 } 00:26:51.352 }' 00:26:51.352 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:51.352 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:51.352 BaseBdev2 00:26:51.353 BaseBdev3 00:26:51.353 BaseBdev4' 00:26:51.353 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:51.353 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:51.353 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:51.353 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:51.353 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:51.353 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.353 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.353 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.353 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:51.353 17:23:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:51.353 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:51.353 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:51.353 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:51.353 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.353 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.610 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.610 [2024-11-26 17:23:21.579313] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:51.611 [2024-11-26 17:23:21.579482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:51.611 [2024-11-26 17:23:21.579692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:51.611 [2024-11-26 17:23:21.579802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:51.611 [2024-11-26 17:23:21.580040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:26:51.611 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.611 17:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70154 00:26:51.611 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70154 ']' 00:26:51.611 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70154 00:26:51.611 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:26:51.611 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:51.611 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70154 00:26:51.611 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:51.611 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:51.611 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70154' 00:26:51.611 killing process with pid 70154 00:26:51.611 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70154 00:26:51.611 [2024-11-26 17:23:21.634637] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:51.611 17:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70154 00:26:52.175 [2024-11-26 17:23:22.044430] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:53.548 17:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:26:53.548 00:26:53.548 real 0m11.822s 00:26:53.548 user 0m18.607s 00:26:53.548 sys 0m2.522s 00:26:53.548 17:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:53.548 17:23:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.548 ************************************ 00:26:53.548 END TEST raid_state_function_test_sb 00:26:53.548 ************************************ 00:26:53.548 17:23:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:26:53.548 17:23:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:53.548 17:23:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:53.548 17:23:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:53.548 ************************************ 00:26:53.548 START TEST raid_superblock_test 00:26:53.548 ************************************ 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70834 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70834 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70834 ']' 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:53.548 17:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.548 [2024-11-26 17:23:23.448035] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:26:53.548 [2024-11-26 17:23:23.448378] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70834 ] 00:26:53.548 [2024-11-26 17:23:23.635847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.807 [2024-11-26 17:23:23.779301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.065 [2024-11-26 17:23:24.001279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:54.065 [2024-11-26 17:23:24.001354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:54.323 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.323 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:26:54.324 
17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.324 malloc1 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.324 [2024-11-26 17:23:24.351998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:54.324 [2024-11-26 17:23:24.352210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:54.324 [2024-11-26 17:23:24.352280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:54.324 [2024-11-26 17:23:24.352381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:54.324 [2024-11-26 17:23:24.355417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:54.324 [2024-11-26 17:23:24.355607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:54.324 pt1 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.324 malloc2 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.324 [2024-11-26 17:23:24.413035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:54.324 [2024-11-26 17:23:24.413216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:54.324 [2024-11-26 17:23:24.413287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:54.324 [2024-11-26 17:23:24.413394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:54.324 [2024-11-26 17:23:24.416066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:54.324 [2024-11-26 17:23:24.416213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:54.324 
pt2 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.324 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.583 malloc3 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.583 [2024-11-26 17:23:24.485921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:54.583 [2024-11-26 17:23:24.486097] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:54.583 [2024-11-26 17:23:24.486159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:54.583 [2024-11-26 17:23:24.486235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:54.583 [2024-11-26 17:23:24.488895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:54.583 [2024-11-26 17:23:24.489025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:54.583 pt3 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.583 malloc4 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.583 [2024-11-26 17:23:24.548723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:54.583 [2024-11-26 17:23:24.548904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:54.583 [2024-11-26 17:23:24.548973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:54.583 [2024-11-26 17:23:24.549078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:54.583 [2024-11-26 17:23:24.551781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:54.583 [2024-11-26 17:23:24.551914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:54.583 pt4 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.583 [2024-11-26 17:23:24.564816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:54.583 [2024-11-26 
17:23:24.567136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:54.583 [2024-11-26 17:23:24.567230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:54.583 [2024-11-26 17:23:24.567285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:54.583 [2024-11-26 17:23:24.567469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:54.583 [2024-11-26 17:23:24.567482] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:54.583 [2024-11-26 17:23:24.567784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:54.583 [2024-11-26 17:23:24.567957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:54.583 [2024-11-26 17:23:24.567972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:54.583 [2024-11-26 17:23:24.568136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:26:54.583 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:54.584 "name": "raid_bdev1", 00:26:54.584 "uuid": "394cbaa6-b04f-4c05-a321-02b4a4326d68", 00:26:54.584 "strip_size_kb": 64, 00:26:54.584 "state": "online", 00:26:54.584 "raid_level": "raid0", 00:26:54.584 "superblock": true, 00:26:54.584 "num_base_bdevs": 4, 00:26:54.584 "num_base_bdevs_discovered": 4, 00:26:54.584 "num_base_bdevs_operational": 4, 00:26:54.584 "base_bdevs_list": [ 00:26:54.584 { 00:26:54.584 "name": "pt1", 00:26:54.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:54.584 "is_configured": true, 00:26:54.584 "data_offset": 2048, 00:26:54.584 "data_size": 63488 00:26:54.584 }, 00:26:54.584 { 00:26:54.584 "name": "pt2", 00:26:54.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:54.584 "is_configured": true, 00:26:54.584 "data_offset": 2048, 00:26:54.584 "data_size": 63488 00:26:54.584 }, 00:26:54.584 { 00:26:54.584 "name": "pt3", 00:26:54.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:54.584 "is_configured": true, 00:26:54.584 "data_offset": 2048, 00:26:54.584 
"data_size": 63488 00:26:54.584 }, 00:26:54.584 { 00:26:54.584 "name": "pt4", 00:26:54.584 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:54.584 "is_configured": true, 00:26:54.584 "data_offset": 2048, 00:26:54.584 "data_size": 63488 00:26:54.584 } 00:26:54.584 ] 00:26:54.584 }' 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:54.584 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.150 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:26:55.150 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:55.150 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:55.150 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:55.150 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:55.150 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:55.150 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:55.150 17:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:55.150 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.150 17:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.150 [2024-11-26 17:23:24.996640] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:55.150 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.150 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:55.150 "name": "raid_bdev1", 00:26:55.150 "aliases": [ 00:26:55.150 "394cbaa6-b04f-4c05-a321-02b4a4326d68" 
00:26:55.150 ], 00:26:55.150 "product_name": "Raid Volume", 00:26:55.150 "block_size": 512, 00:26:55.150 "num_blocks": 253952, 00:26:55.150 "uuid": "394cbaa6-b04f-4c05-a321-02b4a4326d68", 00:26:55.150 "assigned_rate_limits": { 00:26:55.150 "rw_ios_per_sec": 0, 00:26:55.150 "rw_mbytes_per_sec": 0, 00:26:55.150 "r_mbytes_per_sec": 0, 00:26:55.150 "w_mbytes_per_sec": 0 00:26:55.150 }, 00:26:55.150 "claimed": false, 00:26:55.150 "zoned": false, 00:26:55.150 "supported_io_types": { 00:26:55.150 "read": true, 00:26:55.150 "write": true, 00:26:55.150 "unmap": true, 00:26:55.150 "flush": true, 00:26:55.150 "reset": true, 00:26:55.150 "nvme_admin": false, 00:26:55.150 "nvme_io": false, 00:26:55.150 "nvme_io_md": false, 00:26:55.150 "write_zeroes": true, 00:26:55.150 "zcopy": false, 00:26:55.150 "get_zone_info": false, 00:26:55.150 "zone_management": false, 00:26:55.150 "zone_append": false, 00:26:55.150 "compare": false, 00:26:55.150 "compare_and_write": false, 00:26:55.150 "abort": false, 00:26:55.150 "seek_hole": false, 00:26:55.150 "seek_data": false, 00:26:55.150 "copy": false, 00:26:55.150 "nvme_iov_md": false 00:26:55.150 }, 00:26:55.150 "memory_domains": [ 00:26:55.150 { 00:26:55.150 "dma_device_id": "system", 00:26:55.150 "dma_device_type": 1 00:26:55.150 }, 00:26:55.150 { 00:26:55.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.150 "dma_device_type": 2 00:26:55.150 }, 00:26:55.150 { 00:26:55.150 "dma_device_id": "system", 00:26:55.150 "dma_device_type": 1 00:26:55.150 }, 00:26:55.150 { 00:26:55.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.150 "dma_device_type": 2 00:26:55.150 }, 00:26:55.150 { 00:26:55.150 "dma_device_id": "system", 00:26:55.150 "dma_device_type": 1 00:26:55.150 }, 00:26:55.150 { 00:26:55.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.150 "dma_device_type": 2 00:26:55.150 }, 00:26:55.150 { 00:26:55.150 "dma_device_id": "system", 00:26:55.150 "dma_device_type": 1 00:26:55.150 }, 00:26:55.150 { 00:26:55.150 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:26:55.150 "dma_device_type": 2 00:26:55.150 } 00:26:55.150 ], 00:26:55.150 "driver_specific": { 00:26:55.150 "raid": { 00:26:55.150 "uuid": "394cbaa6-b04f-4c05-a321-02b4a4326d68", 00:26:55.150 "strip_size_kb": 64, 00:26:55.150 "state": "online", 00:26:55.150 "raid_level": "raid0", 00:26:55.150 "superblock": true, 00:26:55.150 "num_base_bdevs": 4, 00:26:55.150 "num_base_bdevs_discovered": 4, 00:26:55.150 "num_base_bdevs_operational": 4, 00:26:55.150 "base_bdevs_list": [ 00:26:55.150 { 00:26:55.150 "name": "pt1", 00:26:55.150 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:55.150 "is_configured": true, 00:26:55.150 "data_offset": 2048, 00:26:55.150 "data_size": 63488 00:26:55.150 }, 00:26:55.150 { 00:26:55.150 "name": "pt2", 00:26:55.150 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:55.150 "is_configured": true, 00:26:55.150 "data_offset": 2048, 00:26:55.150 "data_size": 63488 00:26:55.150 }, 00:26:55.150 { 00:26:55.150 "name": "pt3", 00:26:55.150 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:55.150 "is_configured": true, 00:26:55.150 "data_offset": 2048, 00:26:55.150 "data_size": 63488 00:26:55.150 }, 00:26:55.150 { 00:26:55.150 "name": "pt4", 00:26:55.150 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:55.150 "is_configured": true, 00:26:55.150 "data_offset": 2048, 00:26:55.150 "data_size": 63488 00:26:55.150 } 00:26:55.150 ] 00:26:55.150 } 00:26:55.150 } 00:26:55.150 }' 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:55.151 pt2 00:26:55.151 pt3 00:26:55.151 pt4' 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:55.151 17:23:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.151 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.409 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.409 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:55.409 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:55.409 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:55.409 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:26:55.409 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:55.409 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.409 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.409 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.410 [2024-11-26 17:23:25.348113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=394cbaa6-b04f-4c05-a321-02b4a4326d68 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 394cbaa6-b04f-4c05-a321-02b4a4326d68 ']' 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.410 [2024-11-26 17:23:25.395751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:55.410 [2024-11-26 17:23:25.395885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:55.410 [2024-11-26 17:23:25.396053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:55.410 [2024-11-26 17:23:25.396149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:55.410 [2024-11-26 17:23:25.396171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:55.410 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:55.670 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:55.670 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:26:55.670 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:26:55.670 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:26:55.670 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:26:55.670 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:55.670 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:55.670 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:55.670 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:55.670 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:26:55.670 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:55.670 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:55.670 [2024-11-26 17:23:25.563546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:26:55.670 [2024-11-26 17:23:25.566049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:26:55.670 [2024-11-26 17:23:25.566237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:26:55.670 [2024-11-26 17:23:25.566289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:26:55.670 [2024-11-26 17:23:25.566347] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:26:55.670 [2024-11-26 17:23:25.566413] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:26:55.670 [2024-11-26 17:23:25.566438] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:26:55.670 [2024-11-26 17:23:25.566463] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:26:55.670 [2024-11-26 17:23:25.566483] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:26:55.670 [2024-11-26 17:23:25.566501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:26:55.670 request:
00:26:55.670 {
00:26:55.670 "name": "raid_bdev1",
00:26:55.671 "raid_level": "raid0",
00:26:55.671 "base_bdevs": [
00:26:55.671 "malloc1",
00:26:55.671 "malloc2",
00:26:55.671 "malloc3",
00:26:55.671 "malloc4"
00:26:55.671 ],
00:26:55.671 "strip_size_kb": 64,
00:26:55.671 "superblock": false,
00:26:55.671 "method": "bdev_raid_create",
00:26:55.671 "req_id": 1
00:26:55.671 }
00:26:55.671 Got JSON-RPC error response
00:26:55.671 response:
00:26:55.671 {
00:26:55.671 "code": -17,
00:26:55.671 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:26:55.671 }
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:55.671 [2024-11-26 17:23:25.631411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:26:55.671 [2024-11-26 17:23:25.631627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:26:55.671 [2024-11-26 17:23:25.631704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:26:55.671 [2024-11-26 17:23:25.631782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:26:55.671 [2024-11-26 17:23:25.634681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:26:55.671 [2024-11-26 17:23:25.634725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:26:55.671 [2024-11-26 17:23:25.634826] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:26:55.671 [2024-11-26 17:23:25.634891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:26:55.671 pt1
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:26:55.671 "name": "raid_bdev1",
00:26:55.671 "uuid": "394cbaa6-b04f-4c05-a321-02b4a4326d68",
00:26:55.671 "strip_size_kb": 64,
00:26:55.671 "state": "configuring",
00:26:55.671 "raid_level": "raid0",
00:26:55.671 "superblock": true,
00:26:55.671 "num_base_bdevs": 4,
00:26:55.671 "num_base_bdevs_discovered": 1,
00:26:55.671 "num_base_bdevs_operational": 4,
00:26:55.671 "base_bdevs_list": [
00:26:55.671 {
00:26:55.671 "name": "pt1",
00:26:55.671 "uuid": "00000000-0000-0000-0000-000000000001",
00:26:55.671 "is_configured": true,
00:26:55.671 "data_offset": 2048,
00:26:55.671 "data_size": 63488
00:26:55.671 },
00:26:55.671 {
00:26:55.671 "name": null,
00:26:55.671 "uuid": "00000000-0000-0000-0000-000000000002",
00:26:55.671 "is_configured": false,
00:26:55.671 "data_offset": 2048,
00:26:55.671 "data_size": 63488
00:26:55.671 },
00:26:55.671 {
00:26:55.671 "name": null,
00:26:55.671 "uuid": "00000000-0000-0000-0000-000000000003",
00:26:55.671 "is_configured": false,
00:26:55.671 "data_offset": 2048,
00:26:55.671 "data_size": 63488
00:26:55.671 },
00:26:55.671 {
00:26:55.671 "name": null,
00:26:55.671 "uuid": "00000000-0000-0000-0000-000000000004",
00:26:55.671 "is_configured": false,
00:26:55.671 "data_offset": 2048,
00:26:55.671 "data_size": 63488
00:26:55.671 }
00:26:55.671 ]
00:26:55.671 }'
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:26:55.671 17:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:56.238 [2024-11-26 17:23:26.078804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:26:56.238 [2024-11-26 17:23:26.079031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:26:56.238 [2024-11-26 17:23:26.079096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:26:56.238 [2024-11-26 17:23:26.079214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:26:56.238 [2024-11-26 17:23:26.079837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:26:56.238 [2024-11-26 17:23:26.080006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:26:56.238 [2024-11-26 17:23:26.080215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:26:56.238 [2024-11-26 17:23:26.080359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:26:56.238 pt2
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:56.238 [2024-11-26 17:23:26.090779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:26:56.238 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:56.239 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:56.239 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:56.239 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:56.239 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:56.239 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.239 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:56.239 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:26:56.239 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.239 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:26:56.239 "name": "raid_bdev1",
00:26:56.239 "uuid": "394cbaa6-b04f-4c05-a321-02b4a4326d68",
00:26:56.239 "strip_size_kb": 64,
00:26:56.239 "state": "configuring",
00:26:56.239 "raid_level": "raid0",
00:26:56.239 "superblock": true,
00:26:56.239 "num_base_bdevs": 4,
00:26:56.239 "num_base_bdevs_discovered": 1,
00:26:56.239 "num_base_bdevs_operational": 4,
00:26:56.239 "base_bdevs_list": [
00:26:56.239 {
00:26:56.239 "name": "pt1",
00:26:56.239 "uuid": "00000000-0000-0000-0000-000000000001",
00:26:56.239 "is_configured": true,
00:26:56.239 "data_offset": 2048,
00:26:56.239 "data_size": 63488
00:26:56.239 },
00:26:56.239 {
00:26:56.239 "name": null,
00:26:56.239 "uuid": "00000000-0000-0000-0000-000000000002",
00:26:56.239 "is_configured": false,
00:26:56.239 "data_offset": 0,
00:26:56.239 "data_size": 63488
00:26:56.239 },
00:26:56.239 {
00:26:56.239 "name": null,
00:26:56.239 "uuid": "00000000-0000-0000-0000-000000000003",
00:26:56.239 "is_configured": false,
00:26:56.239 "data_offset": 2048,
00:26:56.239 "data_size": 63488
00:26:56.239 },
00:26:56.239 {
00:26:56.239 "name": null,
00:26:56.239 "uuid": "00000000-0000-0000-0000-000000000004",
00:26:56.239 "is_configured": false,
00:26:56.239 "data_offset": 2048,
00:26:56.239 "data_size": 63488
00:26:56.239 }
00:26:56.239 ]
00:26:56.239 }'
00:26:56.239 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:26:56.239 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:56.498 [2024-11-26 17:23:26.530231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:26:56.498 [2024-11-26 17:23:26.530502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:26:56.498 [2024-11-26 17:23:26.530657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:26:56.498 [2024-11-26 17:23:26.530775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:26:56.498 [2024-11-26 17:23:26.531351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:26:56.498 [2024-11-26 17:23:26.531384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:26:56.498 [2024-11-26 17:23:26.531502] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:26:56.498 [2024-11-26 17:23:26.531550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:26:56.498 pt2
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:56.498 [2024-11-26 17:23:26.542179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:26:56.498 [2024-11-26 17:23:26.542244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:26:56.498 [2024-11-26 17:23:26.542270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:26:56.498 [2024-11-26 17:23:26.542284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:26:56.498 [2024-11-26 17:23:26.542857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:26:56.498 [2024-11-26 17:23:26.542880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:26:56.498 [2024-11-26 17:23:26.542985] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:26:56.498 [2024-11-26 17:23:26.543019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:26:56.498 pt3
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:56.498 [2024-11-26 17:23:26.554142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:26:56.498 [2024-11-26 17:23:26.554334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:26:56.498 [2024-11-26 17:23:26.554398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:26:56.498 [2024-11-26 17:23:26.554528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:26:56.498 [2024-11-26 17:23:26.555128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:26:56.498 [2024-11-26 17:23:26.555281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:26:56.498 [2024-11-26 17:23:26.555474] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:26:56.498 [2024-11-26 17:23:26.555622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:26:56.498 [2024-11-26 17:23:26.555839] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:26:56.498 [2024-11-26 17:23:26.555934] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:26:56.498 [2024-11-26 17:23:26.556283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:26:56.498 [2024-11-26 17:23:26.556593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:26:56.498 [2024-11-26 17:23:26.556707] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:26:56.498 [2024-11-26 17:23:26.556971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:26:56.498 pt4
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:26:56.498 "name": "raid_bdev1",
00:26:56.498 "uuid": "394cbaa6-b04f-4c05-a321-02b4a4326d68",
00:26:56.498 "strip_size_kb": 64,
00:26:56.498 "state": "online",
00:26:56.498 "raid_level": "raid0",
00:26:56.498 "superblock": true,
00:26:56.498 "num_base_bdevs": 4,
00:26:56.498 "num_base_bdevs_discovered": 4,
00:26:56.498 "num_base_bdevs_operational": 4,
00:26:56.498 "base_bdevs_list": [
00:26:56.498 {
00:26:56.498 "name": "pt1",
00:26:56.498 "uuid": "00000000-0000-0000-0000-000000000001",
00:26:56.498 "is_configured": true,
00:26:56.498 "data_offset": 2048,
00:26:56.498 "data_size": 63488
00:26:56.498 },
00:26:56.498 {
00:26:56.498 "name": "pt2",
00:26:56.498 "uuid": "00000000-0000-0000-0000-000000000002",
00:26:56.498 "is_configured": true,
00:26:56.498 "data_offset": 2048,
00:26:56.498 "data_size": 63488
00:26:56.498 },
00:26:56.498 {
00:26:56.498 "name": "pt3",
00:26:56.498 "uuid": "00000000-0000-0000-0000-000000000003",
00:26:56.498 "is_configured": true,
00:26:56.498 "data_offset": 2048,
00:26:56.498 "data_size": 63488
00:26:56.498 },
00:26:56.498 {
00:26:56.498 "name": "pt4",
00:26:56.498 "uuid": "00000000-0000-0000-0000-000000000004",
00:26:56.498 "is_configured": true,
00:26:56.498 "data_offset": 2048,
00:26:56.498 "data_size": 63488
00:26:56.498 }
00:26:56.498 ]
00:26:56.498 }'
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:26:56.498 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:57.066 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:26:57.066 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:26:57.066 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:26:57.066 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:26:57.066 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:26:57.066 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:26:57.066 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:26:57.066 17:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:26:57.066 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:57.066 17:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:57.066 [2024-11-26 17:23:26.986093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:26:57.066 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:57.066 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:26:57.066 "name": "raid_bdev1",
00:26:57.066 "aliases": [
00:26:57.066 "394cbaa6-b04f-4c05-a321-02b4a4326d68"
00:26:57.066 ],
00:26:57.066 "product_name": "Raid Volume",
00:26:57.066 "block_size": 512,
00:26:57.066 "num_blocks": 253952,
00:26:57.066 "uuid": "394cbaa6-b04f-4c05-a321-02b4a4326d68",
00:26:57.066 "assigned_rate_limits": {
00:26:57.066 "rw_ios_per_sec": 0,
00:26:57.066 "rw_mbytes_per_sec": 0,
00:26:57.066 "r_mbytes_per_sec": 0,
00:26:57.066 "w_mbytes_per_sec": 0
00:26:57.066 },
00:26:57.066 "claimed": false,
00:26:57.066 "zoned": false,
00:26:57.066 "supported_io_types": {
00:26:57.066 "read": true,
00:26:57.066 "write": true,
00:26:57.066 "unmap": true,
00:26:57.066 "flush": true,
00:26:57.066 "reset": true,
00:26:57.066 "nvme_admin": false,
00:26:57.066 "nvme_io": false,
00:26:57.066 "nvme_io_md": false,
00:26:57.066 "write_zeroes": true,
00:26:57.066 "zcopy": false,
00:26:57.066 "get_zone_info": false,
00:26:57.066 "zone_management": false,
00:26:57.066 "zone_append": false,
00:26:57.066 "compare": false,
00:26:57.066 "compare_and_write": false,
00:26:57.066 "abort": false,
00:26:57.066 "seek_hole": false,
00:26:57.066 "seek_data": false,
00:26:57.066 "copy": false,
00:26:57.066 "nvme_iov_md": false
00:26:57.066 },
00:26:57.066 "memory_domains": [
00:26:57.066 {
00:26:57.066 "dma_device_id": "system",
00:26:57.066 "dma_device_type": 1
00:26:57.066 },
00:26:57.066 {
00:26:57.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:57.066 "dma_device_type": 2
00:26:57.066 },
00:26:57.066 {
00:26:57.066 "dma_device_id": "system",
00:26:57.066 "dma_device_type": 1
00:26:57.066 },
00:26:57.066 {
00:26:57.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:57.066 "dma_device_type": 2
00:26:57.066 },
00:26:57.066 {
00:26:57.066 "dma_device_id": "system",
00:26:57.066 "dma_device_type": 1
00:26:57.066 },
00:26:57.066 {
00:26:57.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:57.066 "dma_device_type": 2
00:26:57.066 },
00:26:57.066 {
00:26:57.066 "dma_device_id": "system",
00:26:57.066 "dma_device_type": 1
00:26:57.066 },
00:26:57.066 {
00:26:57.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:57.066 "dma_device_type": 2
00:26:57.066 }
00:26:57.066 ],
00:26:57.066 "driver_specific": {
00:26:57.066 "raid": {
00:26:57.066 "uuid": "394cbaa6-b04f-4c05-a321-02b4a4326d68",
00:26:57.066 "strip_size_kb": 64,
00:26:57.066 "state": "online",
00:26:57.066 "raid_level": "raid0",
00:26:57.066 "superblock": true,
00:26:57.066 "num_base_bdevs": 4,
00:26:57.066 "num_base_bdevs_discovered": 4,
00:26:57.066 "num_base_bdevs_operational": 4,
00:26:57.066 "base_bdevs_list": [
00:26:57.066 {
00:26:57.066 "name": "pt1",
00:26:57.066 "uuid": "00000000-0000-0000-0000-000000000001",
00:26:57.066 "is_configured": true,
00:26:57.066 "data_offset": 2048,
00:26:57.066 "data_size": 63488
00:26:57.066 },
00:26:57.066 {
00:26:57.066 "name": "pt2",
00:26:57.066 "uuid": "00000000-0000-0000-0000-000000000002",
00:26:57.066 "is_configured": true,
00:26:57.066 "data_offset": 2048,
00:26:57.066 "data_size": 63488
00:26:57.066 },
00:26:57.066 {
00:26:57.066 "name": "pt3",
00:26:57.066 "uuid": "00000000-0000-0000-0000-000000000003",
00:26:57.066 "is_configured": true,
00:26:57.066 "data_offset": 2048,
00:26:57.066 "data_size": 63488
00:26:57.066 },
00:26:57.066 {
00:26:57.066 "name": "pt4",
00:26:57.066 "uuid": "00000000-0000-0000-0000-000000000004",
00:26:57.066 "is_configured": true,
00:26:57.066 "data_offset": 2048,
00:26:57.066 "data_size": 63488
00:26:57.066 }
00:26:57.066 ]
00:26:57.066 }
00:26:57.066 }
00:26:57.066 }'
00:26:57.066 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:26:57.066 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:26:57.066 pt2
00:26:57.066 pt3
00:26:57.066 pt4'
00:26:57.066 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:26:57.066 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:26:57.066 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:26:57.066 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:26:57.066 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:26:57.066 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:57.067 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:57.067 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:57.067 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:26:57.067 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:26:57.067 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:26:57.067 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:26:57.067 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:26:57.067 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:57.067 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:57.325 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:57.326 [2024-11-26 17:23:27.329983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 394cbaa6-b04f-4c05-a321-02b4a4326d68 '!=' 394cbaa6-b04f-4c05-a321-02b4a4326d68 ']'
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70834
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70834 ']'
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70834
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70834
00:26:57.326 killing process with pid 70834 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70834'
00:26:57.326 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70834
00:26:57.326 [2024-11-26 17:23:27.409507] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start [2024-11-26 17:23:27.409654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 17:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70834
00:26:57.326 [2024-11-26 17:23:27.409750] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:26:57.326 [2024-11-26 17:23:27.409765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:26:57.891 [2024-11-26 17:23:27.854497] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:26:59.268 17:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:26:59.268
00:26:59.268 real 0m5.778s
00:26:59.268 user 0m8.116s
00:26:59.268 sys 0m1.241s
00:26:59.268 ************************************
00:26:59.268 END TEST raid_superblock_test
00:26:59.268 ************************************
00:26:59.268 17:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:59.268 17:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:59.268 17:23:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read
00:26:59.268 17:23:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:26:59.268 17:23:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:59.268 17:23:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:26:59.268 ************************************
00:26:59.268 START TEST raid_read_error_test
00:26:59.268 ************************************
00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read
00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:26:59.268 17:23:29 bdev_raid.raid_read_error_test --
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.D6hq3e08Km 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71098 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71098 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71098 ']' 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:59.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:59.268 17:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.268 [2024-11-26 17:23:29.327665] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:26:59.268 [2024-11-26 17:23:29.328663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71098 ] 00:26:59.559 [2024-11-26 17:23:29.515259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.817 [2024-11-26 17:23:29.668917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.817 [2024-11-26 17:23:29.903184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:59.817 [2024-11-26 17:23:29.903260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:00.384 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:00.384 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:27:00.384 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.385 BaseBdev1_malloc 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.385 true 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.385 [2024-11-26 17:23:30.273449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:00.385 [2024-11-26 17:23:30.273662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.385 [2024-11-26 17:23:30.273701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:00.385 [2024-11-26 17:23:30.273717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.385 [2024-11-26 17:23:30.276413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.385 [2024-11-26 17:23:30.276461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:00.385 BaseBdev1 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.385 BaseBdev2_malloc 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.385 true 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.385 [2024-11-26 17:23:30.336165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:00.385 [2024-11-26 17:23:30.336365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.385 [2024-11-26 17:23:30.336432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:00.385 [2024-11-26 17:23:30.336544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.385 [2024-11-26 17:23:30.339601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.385 [2024-11-26 17:23:30.339660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:00.385 BaseBdev2 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.385 BaseBdev3_malloc 00:27:00.385 17:23:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.385 true 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.385 [2024-11-26 17:23:30.408819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:00.385 [2024-11-26 17:23:30.409014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.385 [2024-11-26 17:23:30.409083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:00.385 [2024-11-26 17:23:30.409106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.385 [2024-11-26 17:23:30.412168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.385 [2024-11-26 17:23:30.412220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:00.385 BaseBdev3 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.385 BaseBdev4_malloc 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.385 true 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.385 [2024-11-26 17:23:30.478079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:00.385 [2024-11-26 17:23:30.478269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.385 [2024-11-26 17:23:30.478337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:00.385 [2024-11-26 17:23:30.478421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.385 [2024-11-26 17:23:30.481331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.385 [2024-11-26 17:23:30.481505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:00.385 BaseBdev4 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.385 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.385 [2024-11-26 17:23:30.490455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:00.385 [2024-11-26 17:23:30.492927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:00.385 [2024-11-26 17:23:30.493024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:00.385 [2024-11-26 17:23:30.493103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:00.386 [2024-11-26 17:23:30.493361] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:27:00.386 [2024-11-26 17:23:30.493383] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:00.386 [2024-11-26 17:23:30.493748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:27:00.386 [2024-11-26 17:23:30.493961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:27:00.386 [2024-11-26 17:23:30.493979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:27:00.386 [2024-11-26 17:23:30.494179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:00.386 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.386 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:27:00.386 17:23:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:00.386 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:00.644 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:00.644 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:00.644 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:00.644 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:00.644 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:00.644 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:00.644 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:00.644 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:00.644 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.644 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.644 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.644 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.644 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:00.644 "name": "raid_bdev1", 00:27:00.644 "uuid": "25007462-31e9-43b9-8622-36fe9d95c8b4", 00:27:00.644 "strip_size_kb": 64, 00:27:00.644 "state": "online", 00:27:00.644 "raid_level": "raid0", 00:27:00.644 "superblock": true, 00:27:00.644 "num_base_bdevs": 4, 00:27:00.644 "num_base_bdevs_discovered": 4, 00:27:00.645 "num_base_bdevs_operational": 4, 00:27:00.645 "base_bdevs_list": [ 00:27:00.645 
{ 00:27:00.645 "name": "BaseBdev1", 00:27:00.645 "uuid": "5945a582-4119-5871-b328-fc16b9657c0e", 00:27:00.645 "is_configured": true, 00:27:00.645 "data_offset": 2048, 00:27:00.645 "data_size": 63488 00:27:00.645 }, 00:27:00.645 { 00:27:00.645 "name": "BaseBdev2", 00:27:00.645 "uuid": "57f452a7-0c66-57f2-b50a-64b6265e5dde", 00:27:00.645 "is_configured": true, 00:27:00.645 "data_offset": 2048, 00:27:00.645 "data_size": 63488 00:27:00.645 }, 00:27:00.645 { 00:27:00.645 "name": "BaseBdev3", 00:27:00.645 "uuid": "52779ca0-e0a9-5b0a-ac05-f27729422325", 00:27:00.645 "is_configured": true, 00:27:00.645 "data_offset": 2048, 00:27:00.645 "data_size": 63488 00:27:00.645 }, 00:27:00.645 { 00:27:00.645 "name": "BaseBdev4", 00:27:00.645 "uuid": "772ad1a8-f462-5c59-a65f-ed3c59d590ab", 00:27:00.645 "is_configured": true, 00:27:00.645 "data_offset": 2048, 00:27:00.645 "data_size": 63488 00:27:00.645 } 00:27:00.645 ] 00:27:00.645 }' 00:27:00.645 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:00.645 17:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.903 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:00.903 17:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:00.903 [2024-11-26 17:23:30.991470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:27:01.839 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:01.839 17:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.840 17:23:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.840 17:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.099 17:23:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:02.099 "name": "raid_bdev1", 00:27:02.099 "uuid": "25007462-31e9-43b9-8622-36fe9d95c8b4", 00:27:02.099 "strip_size_kb": 64, 00:27:02.099 "state": "online", 00:27:02.099 "raid_level": "raid0", 00:27:02.099 "superblock": true, 00:27:02.099 "num_base_bdevs": 4, 00:27:02.099 "num_base_bdevs_discovered": 4, 00:27:02.099 "num_base_bdevs_operational": 4, 00:27:02.099 "base_bdevs_list": [ 00:27:02.099 { 00:27:02.099 "name": "BaseBdev1", 00:27:02.099 "uuid": "5945a582-4119-5871-b328-fc16b9657c0e", 00:27:02.099 "is_configured": true, 00:27:02.099 "data_offset": 2048, 00:27:02.099 "data_size": 63488 00:27:02.099 }, 00:27:02.099 { 00:27:02.099 "name": "BaseBdev2", 00:27:02.099 "uuid": "57f452a7-0c66-57f2-b50a-64b6265e5dde", 00:27:02.099 "is_configured": true, 00:27:02.099 "data_offset": 2048, 00:27:02.099 "data_size": 63488 00:27:02.099 }, 00:27:02.099 { 00:27:02.099 "name": "BaseBdev3", 00:27:02.099 "uuid": "52779ca0-e0a9-5b0a-ac05-f27729422325", 00:27:02.099 "is_configured": true, 00:27:02.099 "data_offset": 2048, 00:27:02.099 "data_size": 63488 00:27:02.099 }, 00:27:02.099 { 00:27:02.099 "name": "BaseBdev4", 00:27:02.099 "uuid": "772ad1a8-f462-5c59-a65f-ed3c59d590ab", 00:27:02.099 "is_configured": true, 00:27:02.099 "data_offset": 2048, 00:27:02.099 "data_size": 63488 00:27:02.099 } 00:27:02.099 ] 00:27:02.099 }' 00:27:02.099 17:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:02.099 17:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.358 17:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:02.358 17:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.358 17:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.358 [2024-11-26 17:23:32.365008] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:02.358 [2024-11-26 17:23:32.365060] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:02.358 [2024-11-26 17:23:32.367925] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:02.358 [2024-11-26 17:23:32.368001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:02.358 [2024-11-26 17:23:32.368056] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:02.358 [2024-11-26 17:23:32.368072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:27:02.358 { 00:27:02.358 "results": [ 00:27:02.358 { 00:27:02.358 "job": "raid_bdev1", 00:27:02.358 "core_mask": "0x1", 00:27:02.358 "workload": "randrw", 00:27:02.358 "percentage": 50, 00:27:02.358 "status": "finished", 00:27:02.358 "queue_depth": 1, 00:27:02.358 "io_size": 131072, 00:27:02.358 "runtime": 1.373202, 00:27:02.358 "iops": 14035.080053772133, 00:27:02.358 "mibps": 1754.3850067215167, 00:27:02.358 "io_failed": 1, 00:27:02.358 "io_timeout": 0, 00:27:02.358 "avg_latency_us": 99.82361122397653, 00:27:02.358 "min_latency_us": 28.37590361445783, 00:27:02.358 "max_latency_us": 1546.2811244979919 00:27:02.358 } 00:27:02.358 ], 00:27:02.358 "core_count": 1 00:27:02.358 } 00:27:02.358 17:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.358 17:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71098 00:27:02.358 17:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71098 ']' 00:27:02.358 17:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71098 00:27:02.358 17:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:27:02.358 17:23:32 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:02.358 17:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71098 00:27:02.358 killing process with pid 71098 00:27:02.358 17:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:02.358 17:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:02.358 17:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71098' 00:27:02.358 17:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71098 00:27:02.358 [2024-11-26 17:23:32.420502] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:02.358 17:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71098 00:27:02.925 [2024-11-26 17:23:32.783936] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:04.357 17:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.D6hq3e08Km 00:27:04.357 17:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:27:04.357 17:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:27:04.357 17:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:27:04.357 17:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:27:04.357 17:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:04.357 17:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:04.357 17:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:27:04.357 00:27:04.357 real 0m4.940s 00:27:04.357 user 0m5.713s 00:27:04.357 sys 0m0.725s 00:27:04.357 17:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:27:04.357 ************************************ 00:27:04.357 END TEST raid_read_error_test 00:27:04.357 ************************************ 00:27:04.357 17:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.357 17:23:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:27:04.357 17:23:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:04.357 17:23:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:04.357 17:23:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:04.357 ************************************ 00:27:04.357 START TEST raid_write_error_test 00:27:04.357 ************************************ 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lc22PGZtIJ 00:27:04.357 17:23:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71245 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71245 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71245 ']' 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:04.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:04.357 17:23:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.357 [2024-11-26 17:23:34.350969] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:27:04.357 [2024-11-26 17:23:34.352013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71245 ] 00:27:04.616 [2024-11-26 17:23:34.555320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.616 [2024-11-26 17:23:34.706141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.874 [2024-11-26 17:23:34.939515] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:04.874 [2024-11-26 17:23:34.939581] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:05.132 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:05.132 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:27:05.132 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:05.132 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:05.132 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.132 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.391 BaseBdev1_malloc 00:27:05.391 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.391 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:27:05.391 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.391 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.391 true 00:27:05.391 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:27:05.391 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:05.391 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.391 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.391 [2024-11-26 17:23:35.283328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:05.391 [2024-11-26 17:23:35.283404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.392 [2024-11-26 17:23:35.283429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:05.392 [2024-11-26 17:23:35.283445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.392 [2024-11-26 17:23:35.286039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.392 [2024-11-26 17:23:35.286088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:05.392 BaseBdev1 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.392 BaseBdev2_malloc 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:05.392 17:23:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.392 true 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.392 [2024-11-26 17:23:35.355566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:05.392 [2024-11-26 17:23:35.355641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.392 [2024-11-26 17:23:35.355661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:05.392 [2024-11-26 17:23:35.355676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.392 [2024-11-26 17:23:35.358210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.392 [2024-11-26 17:23:35.358257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:05.392 BaseBdev2 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:27:05.392 BaseBdev3_malloc 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.392 true 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.392 [2024-11-26 17:23:35.438158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:05.392 [2024-11-26 17:23:35.438235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.392 [2024-11-26 17:23:35.438256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:05.392 [2024-11-26 17:23:35.438271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.392 [2024-11-26 17:23:35.440858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.392 [2024-11-26 17:23:35.440905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:05.392 BaseBdev3 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.392 BaseBdev4_malloc 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.392 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.650 true 00:27:05.650 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.650 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:05.650 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.650 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.650 [2024-11-26 17:23:35.511301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:05.650 [2024-11-26 17:23:35.511374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.650 [2024-11-26 17:23:35.511398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:05.650 [2024-11-26 17:23:35.511414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.650 [2024-11-26 17:23:35.514039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.650 [2024-11-26 17:23:35.514092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:05.650 BaseBdev4 
00:27:05.650 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.650 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:27:05.650 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.650 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.651 [2024-11-26 17:23:35.523364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:05.651 [2024-11-26 17:23:35.525658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:05.651 [2024-11-26 17:23:35.525742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:05.651 [2024-11-26 17:23:35.525809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:05.651 [2024-11-26 17:23:35.526034] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:27:05.651 [2024-11-26 17:23:35.526053] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:05.651 [2024-11-26 17:23:35.526334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:27:05.651 [2024-11-26 17:23:35.526526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:27:05.651 [2024-11-26 17:23:35.526541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:27:05.651 [2024-11-26 17:23:35.526714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:05.651 "name": "raid_bdev1", 00:27:05.651 "uuid": "b90ba726-db81-4ad6-8acc-2658d779648f", 00:27:05.651 "strip_size_kb": 64, 00:27:05.651 "state": "online", 00:27:05.651 "raid_level": "raid0", 00:27:05.651 "superblock": true, 00:27:05.651 "num_base_bdevs": 4, 00:27:05.651 "num_base_bdevs_discovered": 4, 00:27:05.651 
"num_base_bdevs_operational": 4, 00:27:05.651 "base_bdevs_list": [ 00:27:05.651 { 00:27:05.651 "name": "BaseBdev1", 00:27:05.651 "uuid": "ffb465a4-4352-5414-9024-3f5466c61270", 00:27:05.651 "is_configured": true, 00:27:05.651 "data_offset": 2048, 00:27:05.651 "data_size": 63488 00:27:05.651 }, 00:27:05.651 { 00:27:05.651 "name": "BaseBdev2", 00:27:05.651 "uuid": "d21a2c7a-c5b3-56c2-984d-9b96b3614a3a", 00:27:05.651 "is_configured": true, 00:27:05.651 "data_offset": 2048, 00:27:05.651 "data_size": 63488 00:27:05.651 }, 00:27:05.651 { 00:27:05.651 "name": "BaseBdev3", 00:27:05.651 "uuid": "fe8fec04-7f29-5bfa-b92d-fed2df4a52c4", 00:27:05.651 "is_configured": true, 00:27:05.651 "data_offset": 2048, 00:27:05.651 "data_size": 63488 00:27:05.651 }, 00:27:05.651 { 00:27:05.651 "name": "BaseBdev4", 00:27:05.651 "uuid": "415f3598-2117-558a-9a63-f102857f460e", 00:27:05.651 "is_configured": true, 00:27:05.651 "data_offset": 2048, 00:27:05.651 "data_size": 63488 00:27:05.651 } 00:27:05.651 ] 00:27:05.651 }' 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:05.651 17:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.909 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:05.909 17:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:06.167 [2024-11-26 17:23:36.052015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:07.103 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:07.104 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.104 17:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.104 17:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.104 17:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.104 17:23:37 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.104 17:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:07.104 "name": "raid_bdev1", 00:27:07.104 "uuid": "b90ba726-db81-4ad6-8acc-2658d779648f", 00:27:07.104 "strip_size_kb": 64, 00:27:07.104 "state": "online", 00:27:07.104 "raid_level": "raid0", 00:27:07.104 "superblock": true, 00:27:07.104 "num_base_bdevs": 4, 00:27:07.104 "num_base_bdevs_discovered": 4, 00:27:07.104 "num_base_bdevs_operational": 4, 00:27:07.104 "base_bdevs_list": [ 00:27:07.104 { 00:27:07.104 "name": "BaseBdev1", 00:27:07.104 "uuid": "ffb465a4-4352-5414-9024-3f5466c61270", 00:27:07.104 "is_configured": true, 00:27:07.104 "data_offset": 2048, 00:27:07.104 "data_size": 63488 00:27:07.104 }, 00:27:07.104 { 00:27:07.104 "name": "BaseBdev2", 00:27:07.104 "uuid": "d21a2c7a-c5b3-56c2-984d-9b96b3614a3a", 00:27:07.104 "is_configured": true, 00:27:07.104 "data_offset": 2048, 00:27:07.104 "data_size": 63488 00:27:07.104 }, 00:27:07.104 { 00:27:07.104 "name": "BaseBdev3", 00:27:07.104 "uuid": "fe8fec04-7f29-5bfa-b92d-fed2df4a52c4", 00:27:07.104 "is_configured": true, 00:27:07.104 "data_offset": 2048, 00:27:07.104 "data_size": 63488 00:27:07.104 }, 00:27:07.104 { 00:27:07.104 "name": "BaseBdev4", 00:27:07.104 "uuid": "415f3598-2117-558a-9a63-f102857f460e", 00:27:07.104 "is_configured": true, 00:27:07.104 "data_offset": 2048, 00:27:07.104 "data_size": 63488 00:27:07.104 } 00:27:07.104 ] 00:27:07.104 }' 00:27:07.104 17:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:07.104 17:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.362 17:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:07.362 17:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.362 17:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:27:07.362 [2024-11-26 17:23:37.384712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:07.362 [2024-11-26 17:23:37.384762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:07.362 [2024-11-26 17:23:37.387409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:07.362 [2024-11-26 17:23:37.387482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:07.362 [2024-11-26 17:23:37.387544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:07.362 [2024-11-26 17:23:37.387559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:27:07.362 17:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.362 { 00:27:07.363 "results": [ 00:27:07.363 { 00:27:07.363 "job": "raid_bdev1", 00:27:07.363 "core_mask": "0x1", 00:27:07.363 "workload": "randrw", 00:27:07.363 "percentage": 50, 00:27:07.363 "status": "finished", 00:27:07.363 "queue_depth": 1, 00:27:07.363 "io_size": 131072, 00:27:07.363 "runtime": 1.332498, 00:27:07.363 "iops": 15173.00588818895, 00:27:07.363 "mibps": 1896.6257360236189, 00:27:07.363 "io_failed": 1, 00:27:07.363 "io_timeout": 0, 00:27:07.363 "avg_latency_us": 90.90923273687261, 00:27:07.363 "min_latency_us": 27.347791164658634, 00:27:07.363 "max_latency_us": 1500.2216867469879 00:27:07.363 } 00:27:07.363 ], 00:27:07.363 "core_count": 1 00:27:07.363 } 00:27:07.363 17:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71245 00:27:07.363 17:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71245 ']' 00:27:07.363 17:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71245 00:27:07.363 17:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:27:07.363 17:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.363 17:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71245 00:27:07.363 17:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:07.363 17:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:07.363 killing process with pid 71245 00:27:07.363 17:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71245' 00:27:07.363 17:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71245 00:27:07.363 [2024-11-26 17:23:37.437791] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:07.363 17:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71245 00:27:07.929 [2024-11-26 17:23:37.775699] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:09.305 17:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lc22PGZtIJ 00:27:09.305 17:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:27:09.305 17:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:27:09.305 17:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:27:09.305 17:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:27:09.305 17:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:09.305 17:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:09.305 17:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:27:09.305 00:27:09.305 real 0m4.820s 00:27:09.305 user 0m5.546s 00:27:09.305 sys 0m0.739s 00:27:09.305 17:23:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.305 17:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:09.305 ************************************ 00:27:09.305 END TEST raid_write_error_test 00:27:09.305 ************************************ 00:27:09.305 17:23:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:27:09.305 17:23:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:27:09.305 17:23:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:09.305 17:23:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:09.305 17:23:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:09.305 ************************************ 00:27:09.305 START TEST raid_state_function_test 00:27:09.305 ************************************ 00:27:09.305 17:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:27:09.305 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:27:09.305 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:27:09.305 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:27:09.305 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:09.305 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:09.305 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:09.305 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:09.305 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:09.305 17:23:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:09.305 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:09.305 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:09.305 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:09.305 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:27:09.305 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71389 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:09.306 Process raid pid: 71389 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71389' 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71389 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71389 ']' 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.306 17:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:09.306 [2024-11-26 17:23:39.241470] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:27:09.306 [2024-11-26 17:23:39.241672] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.564 [2024-11-26 17:23:39.447532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.564 [2024-11-26 17:23:39.596973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.823 [2024-11-26 17:23:39.828348] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:09.823 [2024-11-26 17:23:39.828411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:10.082 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.082 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:27:10.082 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:10.082 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.082 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.082 [2024-11-26 17:23:40.153785] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:10.083 [2024-11-26 17:23:40.153859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:10.083 [2024-11-26 17:23:40.153872] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:10.083 [2024-11-26 17:23:40.153885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:10.083 [2024-11-26 17:23:40.153893] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:27:10.083 [2024-11-26 17:23:40.153906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:10.083 [2024-11-26 17:23:40.153914] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:10.083 [2024-11-26 17:23:40.153927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.083 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.341 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:10.341 "name": "Existed_Raid", 00:27:10.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.341 "strip_size_kb": 64, 00:27:10.341 "state": "configuring", 00:27:10.341 "raid_level": "concat", 00:27:10.341 "superblock": false, 00:27:10.341 "num_base_bdevs": 4, 00:27:10.341 "num_base_bdevs_discovered": 0, 00:27:10.341 "num_base_bdevs_operational": 4, 00:27:10.341 "base_bdevs_list": [ 00:27:10.341 { 00:27:10.341 "name": "BaseBdev1", 00:27:10.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.341 "is_configured": false, 00:27:10.341 "data_offset": 0, 00:27:10.341 "data_size": 0 00:27:10.341 }, 00:27:10.341 { 00:27:10.341 "name": "BaseBdev2", 00:27:10.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.341 "is_configured": false, 00:27:10.341 "data_offset": 0, 00:27:10.341 "data_size": 0 00:27:10.341 }, 00:27:10.341 { 00:27:10.341 "name": "BaseBdev3", 00:27:10.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.341 "is_configured": false, 00:27:10.341 "data_offset": 0, 00:27:10.341 "data_size": 0 00:27:10.341 }, 00:27:10.341 { 00:27:10.341 "name": "BaseBdev4", 00:27:10.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.341 "is_configured": false, 00:27:10.341 "data_offset": 0, 00:27:10.341 "data_size": 0 00:27:10.341 } 00:27:10.341 ] 00:27:10.341 }' 00:27:10.341 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:10.341 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.600 [2024-11-26 17:23:40.605754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:10.600 [2024-11-26 17:23:40.605811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.600 [2024-11-26 17:23:40.617745] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:10.600 [2024-11-26 17:23:40.617799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:10.600 [2024-11-26 17:23:40.617811] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:10.600 [2024-11-26 17:23:40.617824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:10.600 [2024-11-26 17:23:40.617832] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:10.600 [2024-11-26 17:23:40.617845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:10.600 [2024-11-26 17:23:40.617852] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:10.600 [2024-11-26 17:23:40.617864] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.600 [2024-11-26 17:23:40.670213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:10.600 BaseBdev1 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.600 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.600 [ 00:27:10.600 { 00:27:10.600 "name": "BaseBdev1", 00:27:10.600 "aliases": [ 00:27:10.600 "f61b8ccc-609b-40c5-9e01-03f8da5e392f" 00:27:10.600 ], 00:27:10.600 "product_name": "Malloc disk", 00:27:10.600 "block_size": 512, 00:27:10.600 "num_blocks": 65536, 00:27:10.600 "uuid": "f61b8ccc-609b-40c5-9e01-03f8da5e392f", 00:27:10.600 "assigned_rate_limits": { 00:27:10.600 "rw_ios_per_sec": 0, 00:27:10.600 "rw_mbytes_per_sec": 0, 00:27:10.600 "r_mbytes_per_sec": 0, 00:27:10.600 "w_mbytes_per_sec": 0 00:27:10.600 }, 00:27:10.600 "claimed": true, 00:27:10.601 "claim_type": "exclusive_write", 00:27:10.601 "zoned": false, 00:27:10.601 "supported_io_types": { 00:27:10.601 "read": true, 00:27:10.601 "write": true, 00:27:10.601 "unmap": true, 00:27:10.601 "flush": true, 00:27:10.601 "reset": true, 00:27:10.601 "nvme_admin": false, 00:27:10.601 "nvme_io": false, 00:27:10.601 "nvme_io_md": false, 00:27:10.601 "write_zeroes": true, 00:27:10.601 "zcopy": true, 00:27:10.601 "get_zone_info": false, 00:27:10.601 "zone_management": false, 00:27:10.601 "zone_append": false, 00:27:10.601 "compare": false, 00:27:10.601 "compare_and_write": false, 00:27:10.601 "abort": true, 00:27:10.601 "seek_hole": false, 00:27:10.601 "seek_data": false, 00:27:10.601 "copy": true, 00:27:10.601 "nvme_iov_md": false 00:27:10.601 }, 00:27:10.601 "memory_domains": [ 00:27:10.601 { 00:27:10.601 "dma_device_id": "system", 00:27:10.601 "dma_device_type": 1 00:27:10.601 }, 00:27:10.601 { 00:27:10.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:10.859 "dma_device_type": 2 00:27:10.859 } 00:27:10.859 ], 00:27:10.859 "driver_specific": {} 00:27:10.859 } 00:27:10.859 ] 00:27:10.859 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:27:10.859 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:10.859 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:10.859 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:10.859 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:10.859 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:10.859 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:10.859 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:10.859 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:10.859 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:10.859 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:10.859 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:10.860 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:10.860 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.860 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:10.860 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.860 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.860 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:10.860 "name": "Existed_Raid", 
00:27:10.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.860 "strip_size_kb": 64, 00:27:10.860 "state": "configuring", 00:27:10.860 "raid_level": "concat", 00:27:10.860 "superblock": false, 00:27:10.860 "num_base_bdevs": 4, 00:27:10.860 "num_base_bdevs_discovered": 1, 00:27:10.860 "num_base_bdevs_operational": 4, 00:27:10.860 "base_bdevs_list": [ 00:27:10.860 { 00:27:10.860 "name": "BaseBdev1", 00:27:10.860 "uuid": "f61b8ccc-609b-40c5-9e01-03f8da5e392f", 00:27:10.860 "is_configured": true, 00:27:10.860 "data_offset": 0, 00:27:10.860 "data_size": 65536 00:27:10.860 }, 00:27:10.860 { 00:27:10.860 "name": "BaseBdev2", 00:27:10.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.860 "is_configured": false, 00:27:10.860 "data_offset": 0, 00:27:10.860 "data_size": 0 00:27:10.860 }, 00:27:10.860 { 00:27:10.860 "name": "BaseBdev3", 00:27:10.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.860 "is_configured": false, 00:27:10.860 "data_offset": 0, 00:27:10.860 "data_size": 0 00:27:10.860 }, 00:27:10.860 { 00:27:10.860 "name": "BaseBdev4", 00:27:10.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.860 "is_configured": false, 00:27:10.860 "data_offset": 0, 00:27:10.860 "data_size": 0 00:27:10.860 } 00:27:10.860 ] 00:27:10.860 }' 00:27:10.860 17:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:10.860 17:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.174 [2024-11-26 17:23:41.173774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:11.174 [2024-11-26 17:23:41.173878] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.174 [2024-11-26 17:23:41.185818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:11.174 [2024-11-26 17:23:41.188156] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:11.174 [2024-11-26 17:23:41.188213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:11.174 [2024-11-26 17:23:41.188226] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:11.174 [2024-11-26 17:23:41.188242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:11.174 [2024-11-26 17:23:41.188250] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:11.174 [2024-11-26 17:23:41.188262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:11.174 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:11.175 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:11.175 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.175 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.175 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.175 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:11.175 "name": "Existed_Raid", 00:27:11.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.175 "strip_size_kb": 64, 00:27:11.175 "state": "configuring", 00:27:11.175 "raid_level": "concat", 00:27:11.175 "superblock": false, 00:27:11.175 "num_base_bdevs": 4, 00:27:11.175 
"num_base_bdevs_discovered": 1, 00:27:11.175 "num_base_bdevs_operational": 4, 00:27:11.175 "base_bdevs_list": [ 00:27:11.175 { 00:27:11.175 "name": "BaseBdev1", 00:27:11.175 "uuid": "f61b8ccc-609b-40c5-9e01-03f8da5e392f", 00:27:11.175 "is_configured": true, 00:27:11.175 "data_offset": 0, 00:27:11.175 "data_size": 65536 00:27:11.175 }, 00:27:11.175 { 00:27:11.175 "name": "BaseBdev2", 00:27:11.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.175 "is_configured": false, 00:27:11.175 "data_offset": 0, 00:27:11.175 "data_size": 0 00:27:11.175 }, 00:27:11.175 { 00:27:11.175 "name": "BaseBdev3", 00:27:11.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.175 "is_configured": false, 00:27:11.175 "data_offset": 0, 00:27:11.175 "data_size": 0 00:27:11.175 }, 00:27:11.175 { 00:27:11.175 "name": "BaseBdev4", 00:27:11.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.175 "is_configured": false, 00:27:11.175 "data_offset": 0, 00:27:11.175 "data_size": 0 00:27:11.175 } 00:27:11.175 ] 00:27:11.175 }' 00:27:11.175 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:11.175 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.744 [2024-11-26 17:23:41.652331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:11.744 BaseBdev2 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:11.744 17:23:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.744 [ 00:27:11.744 { 00:27:11.744 "name": "BaseBdev2", 00:27:11.744 "aliases": [ 00:27:11.744 "59888886-00f0-4cf4-9c0c-7f005b59e082" 00:27:11.744 ], 00:27:11.744 "product_name": "Malloc disk", 00:27:11.744 "block_size": 512, 00:27:11.744 "num_blocks": 65536, 00:27:11.744 "uuid": "59888886-00f0-4cf4-9c0c-7f005b59e082", 00:27:11.744 "assigned_rate_limits": { 00:27:11.744 "rw_ios_per_sec": 0, 00:27:11.744 "rw_mbytes_per_sec": 0, 00:27:11.744 "r_mbytes_per_sec": 0, 00:27:11.744 "w_mbytes_per_sec": 0 00:27:11.744 }, 00:27:11.744 "claimed": true, 00:27:11.744 "claim_type": "exclusive_write", 00:27:11.744 "zoned": false, 00:27:11.744 "supported_io_types": { 
00:27:11.744 "read": true, 00:27:11.744 "write": true, 00:27:11.744 "unmap": true, 00:27:11.744 "flush": true, 00:27:11.744 "reset": true, 00:27:11.744 "nvme_admin": false, 00:27:11.744 "nvme_io": false, 00:27:11.744 "nvme_io_md": false, 00:27:11.744 "write_zeroes": true, 00:27:11.744 "zcopy": true, 00:27:11.744 "get_zone_info": false, 00:27:11.744 "zone_management": false, 00:27:11.744 "zone_append": false, 00:27:11.744 "compare": false, 00:27:11.744 "compare_and_write": false, 00:27:11.744 "abort": true, 00:27:11.744 "seek_hole": false, 00:27:11.744 "seek_data": false, 00:27:11.744 "copy": true, 00:27:11.744 "nvme_iov_md": false 00:27:11.744 }, 00:27:11.744 "memory_domains": [ 00:27:11.744 { 00:27:11.744 "dma_device_id": "system", 00:27:11.744 "dma_device_type": 1 00:27:11.744 }, 00:27:11.744 { 00:27:11.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:11.744 "dma_device_type": 2 00:27:11.744 } 00:27:11.744 ], 00:27:11.744 "driver_specific": {} 00:27:11.744 } 00:27:11.744 ] 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.744 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:11.744 "name": "Existed_Raid", 00:27:11.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.745 "strip_size_kb": 64, 00:27:11.745 "state": "configuring", 00:27:11.745 "raid_level": "concat", 00:27:11.745 "superblock": false, 00:27:11.745 "num_base_bdevs": 4, 00:27:11.745 "num_base_bdevs_discovered": 2, 00:27:11.745 "num_base_bdevs_operational": 4, 00:27:11.745 "base_bdevs_list": [ 00:27:11.745 { 00:27:11.745 "name": "BaseBdev1", 00:27:11.745 "uuid": "f61b8ccc-609b-40c5-9e01-03f8da5e392f", 00:27:11.745 "is_configured": true, 00:27:11.745 "data_offset": 0, 00:27:11.745 "data_size": 65536 00:27:11.745 }, 00:27:11.745 { 00:27:11.745 "name": "BaseBdev2", 00:27:11.745 "uuid": "59888886-00f0-4cf4-9c0c-7f005b59e082", 00:27:11.745 
"is_configured": true, 00:27:11.745 "data_offset": 0, 00:27:11.745 "data_size": 65536 00:27:11.745 }, 00:27:11.745 { 00:27:11.745 "name": "BaseBdev3", 00:27:11.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.745 "is_configured": false, 00:27:11.745 "data_offset": 0, 00:27:11.745 "data_size": 0 00:27:11.745 }, 00:27:11.745 { 00:27:11.745 "name": "BaseBdev4", 00:27:11.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.745 "is_configured": false, 00:27:11.745 "data_offset": 0, 00:27:11.745 "data_size": 0 00:27:11.745 } 00:27:11.745 ] 00:27:11.745 }' 00:27:11.745 17:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:11.745 17:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.004 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:12.004 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.004 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.265 [2024-11-26 17:23:42.150061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:12.265 BaseBdev3 00:27:12.265 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.265 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:27:12.265 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:27:12.265 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:12.265 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:12.265 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.266 [ 00:27:12.266 { 00:27:12.266 "name": "BaseBdev3", 00:27:12.266 "aliases": [ 00:27:12.266 "e109eb85-9630-47c3-861c-c45bc2e5dac7" 00:27:12.266 ], 00:27:12.266 "product_name": "Malloc disk", 00:27:12.266 "block_size": 512, 00:27:12.266 "num_blocks": 65536, 00:27:12.266 "uuid": "e109eb85-9630-47c3-861c-c45bc2e5dac7", 00:27:12.266 "assigned_rate_limits": { 00:27:12.266 "rw_ios_per_sec": 0, 00:27:12.266 "rw_mbytes_per_sec": 0, 00:27:12.266 "r_mbytes_per_sec": 0, 00:27:12.266 "w_mbytes_per_sec": 0 00:27:12.266 }, 00:27:12.266 "claimed": true, 00:27:12.266 "claim_type": "exclusive_write", 00:27:12.266 "zoned": false, 00:27:12.266 "supported_io_types": { 00:27:12.266 "read": true, 00:27:12.266 "write": true, 00:27:12.266 "unmap": true, 00:27:12.266 "flush": true, 00:27:12.266 "reset": true, 00:27:12.266 "nvme_admin": false, 00:27:12.266 "nvme_io": false, 00:27:12.266 "nvme_io_md": false, 00:27:12.266 "write_zeroes": true, 00:27:12.266 "zcopy": true, 00:27:12.266 "get_zone_info": false, 00:27:12.266 "zone_management": false, 00:27:12.266 "zone_append": false, 00:27:12.266 "compare": false, 00:27:12.266 "compare_and_write": false, 
00:27:12.266 "abort": true, 00:27:12.266 "seek_hole": false, 00:27:12.266 "seek_data": false, 00:27:12.266 "copy": true, 00:27:12.266 "nvme_iov_md": false 00:27:12.266 }, 00:27:12.266 "memory_domains": [ 00:27:12.266 { 00:27:12.266 "dma_device_id": "system", 00:27:12.266 "dma_device_type": 1 00:27:12.266 }, 00:27:12.266 { 00:27:12.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:12.266 "dma_device_type": 2 00:27:12.266 } 00:27:12.266 ], 00:27:12.266 "driver_specific": {} 00:27:12.266 } 00:27:12.266 ] 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.266 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:12.266 "name": "Existed_Raid", 00:27:12.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:12.266 "strip_size_kb": 64, 00:27:12.266 "state": "configuring", 00:27:12.266 "raid_level": "concat", 00:27:12.266 "superblock": false, 00:27:12.266 "num_base_bdevs": 4, 00:27:12.266 "num_base_bdevs_discovered": 3, 00:27:12.266 "num_base_bdevs_operational": 4, 00:27:12.266 "base_bdevs_list": [ 00:27:12.266 { 00:27:12.266 "name": "BaseBdev1", 00:27:12.266 "uuid": "f61b8ccc-609b-40c5-9e01-03f8da5e392f", 00:27:12.266 "is_configured": true, 00:27:12.266 "data_offset": 0, 00:27:12.266 "data_size": 65536 00:27:12.266 }, 00:27:12.266 { 00:27:12.266 "name": "BaseBdev2", 00:27:12.266 "uuid": "59888886-00f0-4cf4-9c0c-7f005b59e082", 00:27:12.266 "is_configured": true, 00:27:12.266 "data_offset": 0, 00:27:12.266 "data_size": 65536 00:27:12.266 }, 00:27:12.266 { 00:27:12.266 "name": "BaseBdev3", 00:27:12.266 "uuid": "e109eb85-9630-47c3-861c-c45bc2e5dac7", 00:27:12.266 "is_configured": true, 00:27:12.266 "data_offset": 0, 00:27:12.266 "data_size": 65536 00:27:12.266 }, 00:27:12.266 { 00:27:12.266 "name": "BaseBdev4", 00:27:12.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:12.267 "is_configured": false, 
00:27:12.267 "data_offset": 0, 00:27:12.267 "data_size": 0 00:27:12.267 } 00:27:12.267 ] 00:27:12.267 }' 00:27:12.267 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:12.267 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.525 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:12.525 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.525 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.786 [2024-11-26 17:23:42.650436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:12.786 [2024-11-26 17:23:42.650512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:12.786 [2024-11-26 17:23:42.650543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:27:12.786 [2024-11-26 17:23:42.650865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:12.786 [2024-11-26 17:23:42.651058] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:12.786 [2024-11-26 17:23:42.651081] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:12.786 [2024-11-26 17:23:42.651436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:12.786 BaseBdev4 00:27:12.786 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.786 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:27:12.786 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:27:12.786 17:23:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:12.786 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:12.786 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:12.786 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:12.786 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:12.786 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.786 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.786 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.787 [ 00:27:12.787 { 00:27:12.787 "name": "BaseBdev4", 00:27:12.787 "aliases": [ 00:27:12.787 "48fb6998-71de-448a-a541-1995c2b51e9f" 00:27:12.787 ], 00:27:12.787 "product_name": "Malloc disk", 00:27:12.787 "block_size": 512, 00:27:12.787 "num_blocks": 65536, 00:27:12.787 "uuid": "48fb6998-71de-448a-a541-1995c2b51e9f", 00:27:12.787 "assigned_rate_limits": { 00:27:12.787 "rw_ios_per_sec": 0, 00:27:12.787 "rw_mbytes_per_sec": 0, 00:27:12.787 "r_mbytes_per_sec": 0, 00:27:12.787 "w_mbytes_per_sec": 0 00:27:12.787 }, 00:27:12.787 "claimed": true, 00:27:12.787 "claim_type": "exclusive_write", 00:27:12.787 "zoned": false, 00:27:12.787 "supported_io_types": { 00:27:12.787 "read": true, 00:27:12.787 "write": true, 00:27:12.787 "unmap": true, 00:27:12.787 "flush": true, 00:27:12.787 "reset": true, 00:27:12.787 
"nvme_admin": false, 00:27:12.787 "nvme_io": false, 00:27:12.787 "nvme_io_md": false, 00:27:12.787 "write_zeroes": true, 00:27:12.787 "zcopy": true, 00:27:12.787 "get_zone_info": false, 00:27:12.787 "zone_management": false, 00:27:12.787 "zone_append": false, 00:27:12.787 "compare": false, 00:27:12.787 "compare_and_write": false, 00:27:12.787 "abort": true, 00:27:12.787 "seek_hole": false, 00:27:12.787 "seek_data": false, 00:27:12.787 "copy": true, 00:27:12.787 "nvme_iov_md": false 00:27:12.787 }, 00:27:12.787 "memory_domains": [ 00:27:12.787 { 00:27:12.787 "dma_device_id": "system", 00:27:12.787 "dma_device_type": 1 00:27:12.787 }, 00:27:12.787 { 00:27:12.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:12.787 "dma_device_type": 2 00:27:12.787 } 00:27:12.787 ], 00:27:12.787 "driver_specific": {} 00:27:12.787 } 00:27:12.787 ] 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:12.787 
17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:12.787 "name": "Existed_Raid", 00:27:12.787 "uuid": "f59faf74-f386-4a3d-8164-d2b44fef1ab1", 00:27:12.787 "strip_size_kb": 64, 00:27:12.787 "state": "online", 00:27:12.787 "raid_level": "concat", 00:27:12.787 "superblock": false, 00:27:12.787 "num_base_bdevs": 4, 00:27:12.787 "num_base_bdevs_discovered": 4, 00:27:12.787 "num_base_bdevs_operational": 4, 00:27:12.787 "base_bdevs_list": [ 00:27:12.787 { 00:27:12.787 "name": "BaseBdev1", 00:27:12.787 "uuid": "f61b8ccc-609b-40c5-9e01-03f8da5e392f", 00:27:12.787 "is_configured": true, 00:27:12.787 "data_offset": 0, 00:27:12.787 "data_size": 65536 00:27:12.787 }, 00:27:12.787 { 00:27:12.787 "name": "BaseBdev2", 00:27:12.787 "uuid": "59888886-00f0-4cf4-9c0c-7f005b59e082", 00:27:12.787 "is_configured": true, 00:27:12.787 "data_offset": 0, 00:27:12.787 "data_size": 65536 00:27:12.787 }, 00:27:12.787 { 00:27:12.787 "name": "BaseBdev3", 
00:27:12.787 "uuid": "e109eb85-9630-47c3-861c-c45bc2e5dac7", 00:27:12.787 "is_configured": true, 00:27:12.787 "data_offset": 0, 00:27:12.787 "data_size": 65536 00:27:12.787 }, 00:27:12.787 { 00:27:12.787 "name": "BaseBdev4", 00:27:12.787 "uuid": "48fb6998-71de-448a-a541-1995c2b51e9f", 00:27:12.787 "is_configured": true, 00:27:12.787 "data_offset": 0, 00:27:12.787 "data_size": 65536 00:27:12.787 } 00:27:12.787 ] 00:27:12.787 }' 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:12.787 17:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.046 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:13.046 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:13.046 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:13.046 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:13.046 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:13.046 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:13.046 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:13.046 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:13.046 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.046 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.046 [2024-11-26 17:23:43.122240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:13.046 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.305 
17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:13.305 "name": "Existed_Raid", 00:27:13.305 "aliases": [ 00:27:13.305 "f59faf74-f386-4a3d-8164-d2b44fef1ab1" 00:27:13.305 ], 00:27:13.305 "product_name": "Raid Volume", 00:27:13.305 "block_size": 512, 00:27:13.305 "num_blocks": 262144, 00:27:13.305 "uuid": "f59faf74-f386-4a3d-8164-d2b44fef1ab1", 00:27:13.305 "assigned_rate_limits": { 00:27:13.305 "rw_ios_per_sec": 0, 00:27:13.305 "rw_mbytes_per_sec": 0, 00:27:13.305 "r_mbytes_per_sec": 0, 00:27:13.305 "w_mbytes_per_sec": 0 00:27:13.305 }, 00:27:13.305 "claimed": false, 00:27:13.305 "zoned": false, 00:27:13.305 "supported_io_types": { 00:27:13.305 "read": true, 00:27:13.305 "write": true, 00:27:13.305 "unmap": true, 00:27:13.305 "flush": true, 00:27:13.305 "reset": true, 00:27:13.305 "nvme_admin": false, 00:27:13.305 "nvme_io": false, 00:27:13.305 "nvme_io_md": false, 00:27:13.305 "write_zeroes": true, 00:27:13.305 "zcopy": false, 00:27:13.305 "get_zone_info": false, 00:27:13.305 "zone_management": false, 00:27:13.305 "zone_append": false, 00:27:13.305 "compare": false, 00:27:13.305 "compare_and_write": false, 00:27:13.305 "abort": false, 00:27:13.305 "seek_hole": false, 00:27:13.305 "seek_data": false, 00:27:13.305 "copy": false, 00:27:13.305 "nvme_iov_md": false 00:27:13.305 }, 00:27:13.305 "memory_domains": [ 00:27:13.305 { 00:27:13.305 "dma_device_id": "system", 00:27:13.305 "dma_device_type": 1 00:27:13.305 }, 00:27:13.305 { 00:27:13.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:13.305 "dma_device_type": 2 00:27:13.305 }, 00:27:13.305 { 00:27:13.305 "dma_device_id": "system", 00:27:13.305 "dma_device_type": 1 00:27:13.305 }, 00:27:13.305 { 00:27:13.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:13.305 "dma_device_type": 2 00:27:13.305 }, 00:27:13.305 { 00:27:13.305 "dma_device_id": "system", 00:27:13.305 "dma_device_type": 1 00:27:13.305 }, 00:27:13.305 { 00:27:13.305 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:27:13.305 "dma_device_type": 2 00:27:13.305 }, 00:27:13.305 { 00:27:13.305 "dma_device_id": "system", 00:27:13.305 "dma_device_type": 1 00:27:13.305 }, 00:27:13.305 { 00:27:13.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:13.305 "dma_device_type": 2 00:27:13.305 } 00:27:13.305 ], 00:27:13.305 "driver_specific": { 00:27:13.305 "raid": { 00:27:13.305 "uuid": "f59faf74-f386-4a3d-8164-d2b44fef1ab1", 00:27:13.305 "strip_size_kb": 64, 00:27:13.305 "state": "online", 00:27:13.305 "raid_level": "concat", 00:27:13.305 "superblock": false, 00:27:13.305 "num_base_bdevs": 4, 00:27:13.305 "num_base_bdevs_discovered": 4, 00:27:13.305 "num_base_bdevs_operational": 4, 00:27:13.305 "base_bdevs_list": [ 00:27:13.305 { 00:27:13.305 "name": "BaseBdev1", 00:27:13.305 "uuid": "f61b8ccc-609b-40c5-9e01-03f8da5e392f", 00:27:13.305 "is_configured": true, 00:27:13.305 "data_offset": 0, 00:27:13.305 "data_size": 65536 00:27:13.305 }, 00:27:13.305 { 00:27:13.305 "name": "BaseBdev2", 00:27:13.305 "uuid": "59888886-00f0-4cf4-9c0c-7f005b59e082", 00:27:13.305 "is_configured": true, 00:27:13.305 "data_offset": 0, 00:27:13.305 "data_size": 65536 00:27:13.305 }, 00:27:13.305 { 00:27:13.305 "name": "BaseBdev3", 00:27:13.305 "uuid": "e109eb85-9630-47c3-861c-c45bc2e5dac7", 00:27:13.305 "is_configured": true, 00:27:13.305 "data_offset": 0, 00:27:13.305 "data_size": 65536 00:27:13.305 }, 00:27:13.305 { 00:27:13.305 "name": "BaseBdev4", 00:27:13.305 "uuid": "48fb6998-71de-448a-a541-1995c2b51e9f", 00:27:13.305 "is_configured": true, 00:27:13.305 "data_offset": 0, 00:27:13.305 "data_size": 65536 00:27:13.305 } 00:27:13.305 ] 00:27:13.305 } 00:27:13.305 } 00:27:13.305 }' 00:27:13.305 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:13.305 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:13.305 BaseBdev2 
00:27:13.305 BaseBdev3 00:27:13.305 BaseBdev4' 00:27:13.305 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:13.305 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:13.305 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:13.305 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:13.305 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:13.305 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.306 17:23:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:13.306 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.564 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:13.564 17:23:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.565 [2024-11-26 17:23:43.445842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:13.565 [2024-11-26 17:23:43.445889] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:13.565 [2024-11-26 17:23:43.445955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:13.565 "name": "Existed_Raid", 00:27:13.565 "uuid": "f59faf74-f386-4a3d-8164-d2b44fef1ab1", 00:27:13.565 "strip_size_kb": 64, 00:27:13.565 "state": "offline", 00:27:13.565 "raid_level": "concat", 00:27:13.565 "superblock": false, 00:27:13.565 "num_base_bdevs": 4, 00:27:13.565 "num_base_bdevs_discovered": 3, 00:27:13.565 "num_base_bdevs_operational": 3, 00:27:13.565 "base_bdevs_list": [ 00:27:13.565 { 00:27:13.565 "name": null, 00:27:13.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.565 "is_configured": false, 00:27:13.565 "data_offset": 0, 00:27:13.565 "data_size": 65536 00:27:13.565 }, 00:27:13.565 { 00:27:13.565 "name": "BaseBdev2", 00:27:13.565 "uuid": "59888886-00f0-4cf4-9c0c-7f005b59e082", 00:27:13.565 "is_configured": 
true, 00:27:13.565 "data_offset": 0, 00:27:13.565 "data_size": 65536 00:27:13.565 }, 00:27:13.565 { 00:27:13.565 "name": "BaseBdev3", 00:27:13.565 "uuid": "e109eb85-9630-47c3-861c-c45bc2e5dac7", 00:27:13.565 "is_configured": true, 00:27:13.565 "data_offset": 0, 00:27:13.565 "data_size": 65536 00:27:13.565 }, 00:27:13.565 { 00:27:13.565 "name": "BaseBdev4", 00:27:13.565 "uuid": "48fb6998-71de-448a-a541-1995c2b51e9f", 00:27:13.565 "is_configured": true, 00:27:13.565 "data_offset": 0, 00:27:13.565 "data_size": 65536 00:27:13.565 } 00:27:13.565 ] 00:27:13.565 }' 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:13.565 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.133 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:14.133 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:14.133 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.133 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:14.133 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.133 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.133 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.133 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:14.133 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:14.133 17:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:14.133 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:14.133 17:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.133 [2024-11-26 17:23:43.997793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:14.133 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.133 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:14.133 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:14.133 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.133 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.133 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:14.133 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.133 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.133 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:14.133 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:14.133 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:27:14.133 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.133 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.133 [2024-11-26 17:23:44.156756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:14.420 17:23:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.420 [2024-11-26 17:23:44.310306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:14.420 [2024-11-26 17:23:44.310374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.420 BaseBdev2 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.420 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.710 [ 00:27:14.710 { 00:27:14.710 "name": "BaseBdev2", 00:27:14.710 "aliases": [ 00:27:14.710 "05f8112e-3360-42ed-bbe2-34f7e1d0ad5a" 00:27:14.710 ], 00:27:14.710 "product_name": "Malloc disk", 00:27:14.710 "block_size": 512, 00:27:14.710 "num_blocks": 65536, 00:27:14.710 "uuid": "05f8112e-3360-42ed-bbe2-34f7e1d0ad5a", 00:27:14.710 "assigned_rate_limits": { 00:27:14.710 "rw_ios_per_sec": 0, 00:27:14.710 "rw_mbytes_per_sec": 0, 00:27:14.710 "r_mbytes_per_sec": 0, 00:27:14.710 "w_mbytes_per_sec": 0 00:27:14.710 }, 00:27:14.710 "claimed": false, 00:27:14.710 "zoned": false, 00:27:14.710 "supported_io_types": { 00:27:14.710 "read": true, 00:27:14.710 "write": true, 00:27:14.710 "unmap": true, 00:27:14.710 "flush": true, 00:27:14.710 "reset": true, 00:27:14.710 "nvme_admin": false, 00:27:14.710 "nvme_io": false, 00:27:14.710 "nvme_io_md": false, 00:27:14.710 "write_zeroes": true, 00:27:14.710 "zcopy": true, 00:27:14.710 "get_zone_info": false, 00:27:14.710 "zone_management": false, 00:27:14.710 "zone_append": false, 00:27:14.710 "compare": false, 00:27:14.710 "compare_and_write": false, 00:27:14.710 "abort": true, 00:27:14.710 "seek_hole": false, 00:27:14.710 
"seek_data": false, 00:27:14.710 "copy": true, 00:27:14.710 "nvme_iov_md": false 00:27:14.710 }, 00:27:14.710 "memory_domains": [ 00:27:14.710 { 00:27:14.710 "dma_device_id": "system", 00:27:14.710 "dma_device_type": 1 00:27:14.710 }, 00:27:14.710 { 00:27:14.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.710 "dma_device_type": 2 00:27:14.710 } 00:27:14.710 ], 00:27:14.710 "driver_specific": {} 00:27:14.710 } 00:27:14.710 ] 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.710 BaseBdev3 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.710 [ 00:27:14.710 { 00:27:14.710 "name": "BaseBdev3", 00:27:14.710 "aliases": [ 00:27:14.710 "ea8136da-f11e-4590-b3e7-9bb6bdbdebe4" 00:27:14.710 ], 00:27:14.710 "product_name": "Malloc disk", 00:27:14.710 "block_size": 512, 00:27:14.710 "num_blocks": 65536, 00:27:14.710 "uuid": "ea8136da-f11e-4590-b3e7-9bb6bdbdebe4", 00:27:14.710 "assigned_rate_limits": { 00:27:14.710 "rw_ios_per_sec": 0, 00:27:14.710 "rw_mbytes_per_sec": 0, 00:27:14.710 "r_mbytes_per_sec": 0, 00:27:14.710 "w_mbytes_per_sec": 0 00:27:14.710 }, 00:27:14.710 "claimed": false, 00:27:14.710 "zoned": false, 00:27:14.710 "supported_io_types": { 00:27:14.710 "read": true, 00:27:14.710 "write": true, 00:27:14.710 "unmap": true, 00:27:14.710 "flush": true, 00:27:14.710 "reset": true, 00:27:14.710 "nvme_admin": false, 00:27:14.710 "nvme_io": false, 00:27:14.710 "nvme_io_md": false, 00:27:14.710 "write_zeroes": true, 00:27:14.710 "zcopy": true, 00:27:14.710 "get_zone_info": false, 00:27:14.710 "zone_management": false, 00:27:14.710 "zone_append": false, 00:27:14.710 "compare": false, 00:27:14.710 "compare_and_write": false, 00:27:14.710 "abort": true, 00:27:14.710 "seek_hole": false, 00:27:14.710 "seek_data": false, 
00:27:14.710 "copy": true, 00:27:14.710 "nvme_iov_md": false 00:27:14.710 }, 00:27:14.710 "memory_domains": [ 00:27:14.710 { 00:27:14.710 "dma_device_id": "system", 00:27:14.710 "dma_device_type": 1 00:27:14.710 }, 00:27:14.710 { 00:27:14.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.710 "dma_device_type": 2 00:27:14.710 } 00:27:14.710 ], 00:27:14.710 "driver_specific": {} 00:27:14.710 } 00:27:14.710 ] 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.710 BaseBdev4 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:14.710 
17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.710 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.710 [ 00:27:14.710 { 00:27:14.710 "name": "BaseBdev4", 00:27:14.710 "aliases": [ 00:27:14.710 "6b48a02d-f571-40b8-a338-7a10300a75e0" 00:27:14.710 ], 00:27:14.710 "product_name": "Malloc disk", 00:27:14.710 "block_size": 512, 00:27:14.710 "num_blocks": 65536, 00:27:14.710 "uuid": "6b48a02d-f571-40b8-a338-7a10300a75e0", 00:27:14.710 "assigned_rate_limits": { 00:27:14.710 "rw_ios_per_sec": 0, 00:27:14.710 "rw_mbytes_per_sec": 0, 00:27:14.710 "r_mbytes_per_sec": 0, 00:27:14.710 "w_mbytes_per_sec": 0 00:27:14.710 }, 00:27:14.710 "claimed": false, 00:27:14.710 "zoned": false, 00:27:14.710 "supported_io_types": { 00:27:14.710 "read": true, 00:27:14.710 "write": true, 00:27:14.710 "unmap": true, 00:27:14.710 "flush": true, 00:27:14.710 "reset": true, 00:27:14.710 "nvme_admin": false, 00:27:14.710 "nvme_io": false, 00:27:14.710 "nvme_io_md": false, 00:27:14.710 "write_zeroes": true, 00:27:14.710 "zcopy": true, 00:27:14.710 "get_zone_info": false, 00:27:14.710 "zone_management": false, 00:27:14.710 "zone_append": false, 00:27:14.710 "compare": false, 00:27:14.710 "compare_and_write": false, 00:27:14.710 "abort": true, 00:27:14.710 "seek_hole": false, 00:27:14.710 "seek_data": false, 00:27:14.710 
"copy": true, 00:27:14.710 "nvme_iov_md": false 00:27:14.710 }, 00:27:14.710 "memory_domains": [ 00:27:14.710 { 00:27:14.710 "dma_device_id": "system", 00:27:14.710 "dma_device_type": 1 00:27:14.710 }, 00:27:14.710 { 00:27:14.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.711 "dma_device_type": 2 00:27:14.711 } 00:27:14.711 ], 00:27:14.711 "driver_specific": {} 00:27:14.711 } 00:27:14.711 ] 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.711 [2024-11-26 17:23:44.745052] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:14.711 [2024-11-26 17:23:44.745135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:14.711 [2024-11-26 17:23:44.745170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:14.711 [2024-11-26 17:23:44.747722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:14.711 [2024-11-26 17:23:44.747789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.711 17:23:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:14.711 "name": "Existed_Raid", 00:27:14.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.711 "strip_size_kb": 64, 00:27:14.711 "state": "configuring", 00:27:14.711 
"raid_level": "concat", 00:27:14.711 "superblock": false, 00:27:14.711 "num_base_bdevs": 4, 00:27:14.711 "num_base_bdevs_discovered": 3, 00:27:14.711 "num_base_bdevs_operational": 4, 00:27:14.711 "base_bdevs_list": [ 00:27:14.711 { 00:27:14.711 "name": "BaseBdev1", 00:27:14.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.711 "is_configured": false, 00:27:14.711 "data_offset": 0, 00:27:14.711 "data_size": 0 00:27:14.711 }, 00:27:14.711 { 00:27:14.711 "name": "BaseBdev2", 00:27:14.711 "uuid": "05f8112e-3360-42ed-bbe2-34f7e1d0ad5a", 00:27:14.711 "is_configured": true, 00:27:14.711 "data_offset": 0, 00:27:14.711 "data_size": 65536 00:27:14.711 }, 00:27:14.711 { 00:27:14.711 "name": "BaseBdev3", 00:27:14.711 "uuid": "ea8136da-f11e-4590-b3e7-9bb6bdbdebe4", 00:27:14.711 "is_configured": true, 00:27:14.711 "data_offset": 0, 00:27:14.711 "data_size": 65536 00:27:14.711 }, 00:27:14.711 { 00:27:14.711 "name": "BaseBdev4", 00:27:14.711 "uuid": "6b48a02d-f571-40b8-a338-7a10300a75e0", 00:27:14.711 "is_configured": true, 00:27:14.711 "data_offset": 0, 00:27:14.711 "data_size": 65536 00:27:14.711 } 00:27:14.711 ] 00:27:14.711 }' 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:14.711 17:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.286 [2024-11-26 17:23:45.244378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:15.286 "name": "Existed_Raid", 00:27:15.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.286 "strip_size_kb": 64, 00:27:15.286 "state": "configuring", 00:27:15.286 "raid_level": "concat", 00:27:15.286 "superblock": false, 
00:27:15.286 "num_base_bdevs": 4, 00:27:15.286 "num_base_bdevs_discovered": 2, 00:27:15.286 "num_base_bdevs_operational": 4, 00:27:15.286 "base_bdevs_list": [ 00:27:15.286 { 00:27:15.286 "name": "BaseBdev1", 00:27:15.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.286 "is_configured": false, 00:27:15.286 "data_offset": 0, 00:27:15.286 "data_size": 0 00:27:15.286 }, 00:27:15.286 { 00:27:15.286 "name": null, 00:27:15.286 "uuid": "05f8112e-3360-42ed-bbe2-34f7e1d0ad5a", 00:27:15.286 "is_configured": false, 00:27:15.286 "data_offset": 0, 00:27:15.286 "data_size": 65536 00:27:15.286 }, 00:27:15.286 { 00:27:15.286 "name": "BaseBdev3", 00:27:15.286 "uuid": "ea8136da-f11e-4590-b3e7-9bb6bdbdebe4", 00:27:15.286 "is_configured": true, 00:27:15.286 "data_offset": 0, 00:27:15.286 "data_size": 65536 00:27:15.286 }, 00:27:15.286 { 00:27:15.286 "name": "BaseBdev4", 00:27:15.286 "uuid": "6b48a02d-f571-40b8-a338-7a10300a75e0", 00:27:15.286 "is_configured": true, 00:27:15.286 "data_offset": 0, 00:27:15.286 "data_size": 65536 00:27:15.286 } 00:27:15.286 ] 00:27:15.286 }' 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:15.286 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.545 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:15.545 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.545 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.545 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:27:15.805 17:23:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.805 [2024-11-26 17:23:45.730851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:15.805 BaseBdev1 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.805 17:23:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:15.805 [ 00:27:15.805 { 00:27:15.805 "name": "BaseBdev1", 00:27:15.805 "aliases": [ 00:27:15.805 "be1f426c-461e-4a18-8d2a-ed35e319cd0f" 00:27:15.805 ], 00:27:15.805 "product_name": "Malloc disk", 00:27:15.805 "block_size": 512, 00:27:15.805 "num_blocks": 65536, 00:27:15.805 "uuid": "be1f426c-461e-4a18-8d2a-ed35e319cd0f", 00:27:15.805 "assigned_rate_limits": { 00:27:15.805 "rw_ios_per_sec": 0, 00:27:15.805 "rw_mbytes_per_sec": 0, 00:27:15.805 "r_mbytes_per_sec": 0, 00:27:15.805 "w_mbytes_per_sec": 0 00:27:15.805 }, 00:27:15.805 "claimed": true, 00:27:15.805 "claim_type": "exclusive_write", 00:27:15.805 "zoned": false, 00:27:15.805 "supported_io_types": { 00:27:15.805 "read": true, 00:27:15.805 "write": true, 00:27:15.805 "unmap": true, 00:27:15.805 "flush": true, 00:27:15.805 "reset": true, 00:27:15.805 "nvme_admin": false, 00:27:15.805 "nvme_io": false, 00:27:15.805 "nvme_io_md": false, 00:27:15.805 "write_zeroes": true, 00:27:15.805 "zcopy": true, 00:27:15.805 "get_zone_info": false, 00:27:15.805 "zone_management": false, 00:27:15.805 "zone_append": false, 00:27:15.805 "compare": false, 00:27:15.805 "compare_and_write": false, 00:27:15.805 "abort": true, 00:27:15.805 "seek_hole": false, 00:27:15.805 "seek_data": false, 00:27:15.805 "copy": true, 00:27:15.805 "nvme_iov_md": false 00:27:15.805 }, 00:27:15.805 "memory_domains": [ 00:27:15.805 { 00:27:15.805 "dma_device_id": "system", 00:27:15.805 "dma_device_type": 1 00:27:15.805 }, 00:27:15.805 { 00:27:15.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.805 "dma_device_type": 2 00:27:15.805 } 00:27:15.805 ], 00:27:15.805 "driver_specific": {} 00:27:15.805 } 00:27:15.805 ] 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:15.806 "name": "Existed_Raid", 00:27:15.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.806 "strip_size_kb": 64, 00:27:15.806 "state": "configuring", 00:27:15.806 "raid_level": "concat", 00:27:15.806 "superblock": false, 
00:27:15.806 "num_base_bdevs": 4, 00:27:15.806 "num_base_bdevs_discovered": 3, 00:27:15.806 "num_base_bdevs_operational": 4, 00:27:15.806 "base_bdevs_list": [ 00:27:15.806 { 00:27:15.806 "name": "BaseBdev1", 00:27:15.806 "uuid": "be1f426c-461e-4a18-8d2a-ed35e319cd0f", 00:27:15.806 "is_configured": true, 00:27:15.806 "data_offset": 0, 00:27:15.806 "data_size": 65536 00:27:15.806 }, 00:27:15.806 { 00:27:15.806 "name": null, 00:27:15.806 "uuid": "05f8112e-3360-42ed-bbe2-34f7e1d0ad5a", 00:27:15.806 "is_configured": false, 00:27:15.806 "data_offset": 0, 00:27:15.806 "data_size": 65536 00:27:15.806 }, 00:27:15.806 { 00:27:15.806 "name": "BaseBdev3", 00:27:15.806 "uuid": "ea8136da-f11e-4590-b3e7-9bb6bdbdebe4", 00:27:15.806 "is_configured": true, 00:27:15.806 "data_offset": 0, 00:27:15.806 "data_size": 65536 00:27:15.806 }, 00:27:15.806 { 00:27:15.806 "name": "BaseBdev4", 00:27:15.806 "uuid": "6b48a02d-f571-40b8-a338-7a10300a75e0", 00:27:15.806 "is_configured": true, 00:27:15.806 "data_offset": 0, 00:27:15.806 "data_size": 65536 00:27:15.806 } 00:27:15.806 ] 00:27:15.806 }' 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:15.806 17:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.374 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:16.374 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.374 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.374 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.374 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.374 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:27:16.375 17:23:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.375 [2024-11-26 17:23:46.230358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:16.375 "name": "Existed_Raid", 00:27:16.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.375 "strip_size_kb": 64, 00:27:16.375 "state": "configuring", 00:27:16.375 "raid_level": "concat", 00:27:16.375 "superblock": false, 00:27:16.375 "num_base_bdevs": 4, 00:27:16.375 "num_base_bdevs_discovered": 2, 00:27:16.375 "num_base_bdevs_operational": 4, 00:27:16.375 "base_bdevs_list": [ 00:27:16.375 { 00:27:16.375 "name": "BaseBdev1", 00:27:16.375 "uuid": "be1f426c-461e-4a18-8d2a-ed35e319cd0f", 00:27:16.375 "is_configured": true, 00:27:16.375 "data_offset": 0, 00:27:16.375 "data_size": 65536 00:27:16.375 }, 00:27:16.375 { 00:27:16.375 "name": null, 00:27:16.375 "uuid": "05f8112e-3360-42ed-bbe2-34f7e1d0ad5a", 00:27:16.375 "is_configured": false, 00:27:16.375 "data_offset": 0, 00:27:16.375 "data_size": 65536 00:27:16.375 }, 00:27:16.375 { 00:27:16.375 "name": null, 00:27:16.375 "uuid": "ea8136da-f11e-4590-b3e7-9bb6bdbdebe4", 00:27:16.375 "is_configured": false, 00:27:16.375 "data_offset": 0, 00:27:16.375 "data_size": 65536 00:27:16.375 }, 00:27:16.375 { 00:27:16.375 "name": "BaseBdev4", 00:27:16.375 "uuid": "6b48a02d-f571-40b8-a338-7a10300a75e0", 00:27:16.375 "is_configured": true, 00:27:16.375 "data_offset": 0, 00:27:16.375 "data_size": 65536 00:27:16.375 } 00:27:16.375 ] 00:27:16.375 }' 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:16.375 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.635 [2024-11-26 17:23:46.709780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:16.635 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.894 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:16.894 "name": "Existed_Raid", 00:27:16.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.894 "strip_size_kb": 64, 00:27:16.894 "state": "configuring", 00:27:16.894 "raid_level": "concat", 00:27:16.894 "superblock": false, 00:27:16.894 "num_base_bdevs": 4, 00:27:16.894 "num_base_bdevs_discovered": 3, 00:27:16.894 "num_base_bdevs_operational": 4, 00:27:16.894 "base_bdevs_list": [ 00:27:16.894 { 00:27:16.894 "name": "BaseBdev1", 00:27:16.894 "uuid": "be1f426c-461e-4a18-8d2a-ed35e319cd0f", 00:27:16.894 "is_configured": true, 00:27:16.894 "data_offset": 0, 00:27:16.894 "data_size": 65536 00:27:16.894 }, 00:27:16.894 { 00:27:16.894 "name": null, 00:27:16.894 "uuid": "05f8112e-3360-42ed-bbe2-34f7e1d0ad5a", 00:27:16.894 "is_configured": false, 00:27:16.894 "data_offset": 0, 00:27:16.894 "data_size": 65536 00:27:16.894 }, 00:27:16.894 { 00:27:16.894 "name": "BaseBdev3", 00:27:16.894 "uuid": "ea8136da-f11e-4590-b3e7-9bb6bdbdebe4", 00:27:16.894 
"is_configured": true, 00:27:16.894 "data_offset": 0, 00:27:16.894 "data_size": 65536 00:27:16.894 }, 00:27:16.894 { 00:27:16.894 "name": "BaseBdev4", 00:27:16.894 "uuid": "6b48a02d-f571-40b8-a338-7a10300a75e0", 00:27:16.894 "is_configured": true, 00:27:16.894 "data_offset": 0, 00:27:16.894 "data_size": 65536 00:27:16.894 } 00:27:16.894 ] 00:27:16.894 }' 00:27:16.894 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:16.894 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.153 [2024-11-26 17:23:47.149768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.153 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.411 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.411 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:17.411 "name": "Existed_Raid", 00:27:17.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:17.411 "strip_size_kb": 64, 00:27:17.411 "state": "configuring", 00:27:17.411 "raid_level": "concat", 00:27:17.411 "superblock": false, 00:27:17.411 "num_base_bdevs": 4, 00:27:17.411 "num_base_bdevs_discovered": 2, 00:27:17.411 "num_base_bdevs_operational": 4, 
00:27:17.411 "base_bdevs_list": [ 00:27:17.411 { 00:27:17.411 "name": null, 00:27:17.411 "uuid": "be1f426c-461e-4a18-8d2a-ed35e319cd0f", 00:27:17.411 "is_configured": false, 00:27:17.411 "data_offset": 0, 00:27:17.411 "data_size": 65536 00:27:17.411 }, 00:27:17.411 { 00:27:17.411 "name": null, 00:27:17.411 "uuid": "05f8112e-3360-42ed-bbe2-34f7e1d0ad5a", 00:27:17.411 "is_configured": false, 00:27:17.411 "data_offset": 0, 00:27:17.411 "data_size": 65536 00:27:17.411 }, 00:27:17.411 { 00:27:17.411 "name": "BaseBdev3", 00:27:17.411 "uuid": "ea8136da-f11e-4590-b3e7-9bb6bdbdebe4", 00:27:17.411 "is_configured": true, 00:27:17.411 "data_offset": 0, 00:27:17.411 "data_size": 65536 00:27:17.411 }, 00:27:17.411 { 00:27:17.411 "name": "BaseBdev4", 00:27:17.411 "uuid": "6b48a02d-f571-40b8-a338-7a10300a75e0", 00:27:17.411 "is_configured": true, 00:27:17.411 "data_offset": 0, 00:27:17.411 "data_size": 65536 00:27:17.411 } 00:27:17.411 ] 00:27:17.411 }' 00:27:17.411 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:17.411 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:17.671 17:23:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.671 [2024-11-26 17:23:47.651924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:17.671 "name": "Existed_Raid", 00:27:17.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:17.671 "strip_size_kb": 64, 00:27:17.671 "state": "configuring", 00:27:17.671 "raid_level": "concat", 00:27:17.671 "superblock": false, 00:27:17.671 "num_base_bdevs": 4, 00:27:17.671 "num_base_bdevs_discovered": 3, 00:27:17.671 "num_base_bdevs_operational": 4, 00:27:17.671 "base_bdevs_list": [ 00:27:17.671 { 00:27:17.671 "name": null, 00:27:17.671 "uuid": "be1f426c-461e-4a18-8d2a-ed35e319cd0f", 00:27:17.671 "is_configured": false, 00:27:17.671 "data_offset": 0, 00:27:17.671 "data_size": 65536 00:27:17.671 }, 00:27:17.671 { 00:27:17.671 "name": "BaseBdev2", 00:27:17.671 "uuid": "05f8112e-3360-42ed-bbe2-34f7e1d0ad5a", 00:27:17.671 "is_configured": true, 00:27:17.671 "data_offset": 0, 00:27:17.671 "data_size": 65536 00:27:17.671 }, 00:27:17.671 { 00:27:17.671 "name": "BaseBdev3", 00:27:17.671 "uuid": "ea8136da-f11e-4590-b3e7-9bb6bdbdebe4", 00:27:17.671 "is_configured": true, 00:27:17.671 "data_offset": 0, 00:27:17.671 "data_size": 65536 00:27:17.671 }, 00:27:17.671 { 00:27:17.671 "name": "BaseBdev4", 00:27:17.671 "uuid": "6b48a02d-f571-40b8-a338-7a10300a75e0", 00:27:17.671 "is_configured": true, 00:27:17.671 "data_offset": 0, 00:27:17.671 "data_size": 65536 00:27:17.671 } 00:27:17.671 ] 00:27:17.671 }' 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:17.671 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u be1f426c-461e-4a18-8d2a-ed35e319cd0f 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.329 [2024-11-26 17:23:48.199977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:18.329 [2024-11-26 17:23:48.200042] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:18.329 [2024-11-26 17:23:48.200052] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:27:18.329 [2024-11-26 17:23:48.200365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:27:18.329 [2024-11-26 17:23:48.200534] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:18.329 [2024-11-26 17:23:48.200551] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:27:18.329 [2024-11-26 17:23:48.200811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:18.329 NewBaseBdev 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.329 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.329 [ 00:27:18.329 { 
00:27:18.329 "name": "NewBaseBdev", 00:27:18.329 "aliases": [ 00:27:18.329 "be1f426c-461e-4a18-8d2a-ed35e319cd0f" 00:27:18.329 ], 00:27:18.329 "product_name": "Malloc disk", 00:27:18.329 "block_size": 512, 00:27:18.329 "num_blocks": 65536, 00:27:18.329 "uuid": "be1f426c-461e-4a18-8d2a-ed35e319cd0f", 00:27:18.329 "assigned_rate_limits": { 00:27:18.329 "rw_ios_per_sec": 0, 00:27:18.329 "rw_mbytes_per_sec": 0, 00:27:18.329 "r_mbytes_per_sec": 0, 00:27:18.329 "w_mbytes_per_sec": 0 00:27:18.329 }, 00:27:18.329 "claimed": true, 00:27:18.329 "claim_type": "exclusive_write", 00:27:18.329 "zoned": false, 00:27:18.329 "supported_io_types": { 00:27:18.329 "read": true, 00:27:18.329 "write": true, 00:27:18.329 "unmap": true, 00:27:18.329 "flush": true, 00:27:18.329 "reset": true, 00:27:18.329 "nvme_admin": false, 00:27:18.329 "nvme_io": false, 00:27:18.329 "nvme_io_md": false, 00:27:18.329 "write_zeroes": true, 00:27:18.329 "zcopy": true, 00:27:18.329 "get_zone_info": false, 00:27:18.329 "zone_management": false, 00:27:18.329 "zone_append": false, 00:27:18.329 "compare": false, 00:27:18.330 "compare_and_write": false, 00:27:18.330 "abort": true, 00:27:18.330 "seek_hole": false, 00:27:18.330 "seek_data": false, 00:27:18.330 "copy": true, 00:27:18.330 "nvme_iov_md": false 00:27:18.330 }, 00:27:18.330 "memory_domains": [ 00:27:18.330 { 00:27:18.330 "dma_device_id": "system", 00:27:18.330 "dma_device_type": 1 00:27:18.330 }, 00:27:18.330 { 00:27:18.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:18.330 "dma_device_type": 2 00:27:18.330 } 00:27:18.330 ], 00:27:18.330 "driver_specific": {} 00:27:18.330 } 00:27:18.330 ] 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:27:18.330 
17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:18.330 "name": "Existed_Raid", 00:27:18.330 "uuid": "0a98220b-e54d-422e-b9ae-8de2416ed379", 00:27:18.330 "strip_size_kb": 64, 00:27:18.330 "state": "online", 00:27:18.330 "raid_level": "concat", 00:27:18.330 "superblock": false, 00:27:18.330 "num_base_bdevs": 4, 00:27:18.330 "num_base_bdevs_discovered": 4, 00:27:18.330 
"num_base_bdevs_operational": 4, 00:27:18.330 "base_bdevs_list": [ 00:27:18.330 { 00:27:18.330 "name": "NewBaseBdev", 00:27:18.330 "uuid": "be1f426c-461e-4a18-8d2a-ed35e319cd0f", 00:27:18.330 "is_configured": true, 00:27:18.330 "data_offset": 0, 00:27:18.330 "data_size": 65536 00:27:18.330 }, 00:27:18.330 { 00:27:18.330 "name": "BaseBdev2", 00:27:18.330 "uuid": "05f8112e-3360-42ed-bbe2-34f7e1d0ad5a", 00:27:18.330 "is_configured": true, 00:27:18.330 "data_offset": 0, 00:27:18.330 "data_size": 65536 00:27:18.330 }, 00:27:18.330 { 00:27:18.330 "name": "BaseBdev3", 00:27:18.330 "uuid": "ea8136da-f11e-4590-b3e7-9bb6bdbdebe4", 00:27:18.330 "is_configured": true, 00:27:18.330 "data_offset": 0, 00:27:18.330 "data_size": 65536 00:27:18.330 }, 00:27:18.330 { 00:27:18.330 "name": "BaseBdev4", 00:27:18.330 "uuid": "6b48a02d-f571-40b8-a338-7a10300a75e0", 00:27:18.330 "is_configured": true, 00:27:18.330 "data_offset": 0, 00:27:18.330 "data_size": 65536 00:27:18.330 } 00:27:18.330 ] 00:27:18.330 }' 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:18.330 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.589 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:27:18.589 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:18.589 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:18.589 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:18.589 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:18.589 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:18.589 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:27:18.589 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.589 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:18.589 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.589 [2024-11-26 17:23:48.683814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:18.849 "name": "Existed_Raid", 00:27:18.849 "aliases": [ 00:27:18.849 "0a98220b-e54d-422e-b9ae-8de2416ed379" 00:27:18.849 ], 00:27:18.849 "product_name": "Raid Volume", 00:27:18.849 "block_size": 512, 00:27:18.849 "num_blocks": 262144, 00:27:18.849 "uuid": "0a98220b-e54d-422e-b9ae-8de2416ed379", 00:27:18.849 "assigned_rate_limits": { 00:27:18.849 "rw_ios_per_sec": 0, 00:27:18.849 "rw_mbytes_per_sec": 0, 00:27:18.849 "r_mbytes_per_sec": 0, 00:27:18.849 "w_mbytes_per_sec": 0 00:27:18.849 }, 00:27:18.849 "claimed": false, 00:27:18.849 "zoned": false, 00:27:18.849 "supported_io_types": { 00:27:18.849 "read": true, 00:27:18.849 "write": true, 00:27:18.849 "unmap": true, 00:27:18.849 "flush": true, 00:27:18.849 "reset": true, 00:27:18.849 "nvme_admin": false, 00:27:18.849 "nvme_io": false, 00:27:18.849 "nvme_io_md": false, 00:27:18.849 "write_zeroes": true, 00:27:18.849 "zcopy": false, 00:27:18.849 "get_zone_info": false, 00:27:18.849 "zone_management": false, 00:27:18.849 "zone_append": false, 00:27:18.849 "compare": false, 00:27:18.849 "compare_and_write": false, 00:27:18.849 "abort": false, 00:27:18.849 "seek_hole": false, 00:27:18.849 "seek_data": false, 00:27:18.849 "copy": false, 00:27:18.849 "nvme_iov_md": false 00:27:18.849 }, 00:27:18.849 "memory_domains": [ 00:27:18.849 { 00:27:18.849 "dma_device_id": "system", 
00:27:18.849 "dma_device_type": 1 00:27:18.849 }, 00:27:18.849 { 00:27:18.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:18.849 "dma_device_type": 2 00:27:18.849 }, 00:27:18.849 { 00:27:18.849 "dma_device_id": "system", 00:27:18.849 "dma_device_type": 1 00:27:18.849 }, 00:27:18.849 { 00:27:18.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:18.849 "dma_device_type": 2 00:27:18.849 }, 00:27:18.849 { 00:27:18.849 "dma_device_id": "system", 00:27:18.849 "dma_device_type": 1 00:27:18.849 }, 00:27:18.849 { 00:27:18.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:18.849 "dma_device_type": 2 00:27:18.849 }, 00:27:18.849 { 00:27:18.849 "dma_device_id": "system", 00:27:18.849 "dma_device_type": 1 00:27:18.849 }, 00:27:18.849 { 00:27:18.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:18.849 "dma_device_type": 2 00:27:18.849 } 00:27:18.849 ], 00:27:18.849 "driver_specific": { 00:27:18.849 "raid": { 00:27:18.849 "uuid": "0a98220b-e54d-422e-b9ae-8de2416ed379", 00:27:18.849 "strip_size_kb": 64, 00:27:18.849 "state": "online", 00:27:18.849 "raid_level": "concat", 00:27:18.849 "superblock": false, 00:27:18.849 "num_base_bdevs": 4, 00:27:18.849 "num_base_bdevs_discovered": 4, 00:27:18.849 "num_base_bdevs_operational": 4, 00:27:18.849 "base_bdevs_list": [ 00:27:18.849 { 00:27:18.849 "name": "NewBaseBdev", 00:27:18.849 "uuid": "be1f426c-461e-4a18-8d2a-ed35e319cd0f", 00:27:18.849 "is_configured": true, 00:27:18.849 "data_offset": 0, 00:27:18.849 "data_size": 65536 00:27:18.849 }, 00:27:18.849 { 00:27:18.849 "name": "BaseBdev2", 00:27:18.849 "uuid": "05f8112e-3360-42ed-bbe2-34f7e1d0ad5a", 00:27:18.849 "is_configured": true, 00:27:18.849 "data_offset": 0, 00:27:18.849 "data_size": 65536 00:27:18.849 }, 00:27:18.849 { 00:27:18.849 "name": "BaseBdev3", 00:27:18.849 "uuid": "ea8136da-f11e-4590-b3e7-9bb6bdbdebe4", 00:27:18.849 "is_configured": true, 00:27:18.849 "data_offset": 0, 00:27:18.849 "data_size": 65536 00:27:18.849 }, 00:27:18.849 { 00:27:18.849 "name": "BaseBdev4", 
00:27:18.849 "uuid": "6b48a02d-f571-40b8-a338-7a10300a75e0", 00:27:18.849 "is_configured": true, 00:27:18.849 "data_offset": 0, 00:27:18.849 "data_size": 65536 00:27:18.849 } 00:27:18.849 ] 00:27:18.849 } 00:27:18.849 } 00:27:18.849 }' 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:27:18.849 BaseBdev2 00:27:18.849 BaseBdev3 00:27:18.849 BaseBdev4' 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.849 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:18.850 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:18.850 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:18.850 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:18.850 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.850 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.850 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:18.850 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.850 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:18.850 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:18.850 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:18.850 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:18.850 17:23:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.850 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.850 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:18.850 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.109 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:19.109 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:19.109 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:19.109 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.109 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.109 [2024-11-26 17:23:48.987013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:19.109 [2024-11-26 17:23:48.987059] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:19.109 [2024-11-26 17:23:48.987172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:19.109 [2024-11-26 17:23:48.987252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:19.109 [2024-11-26 17:23:48.987265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:27:19.109 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.109 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71389 00:27:19.109 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71389 ']' 00:27:19.109 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71389 00:27:19.109 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:27:19.109 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:19.109 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71389 00:27:19.109 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:19.109 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:19.109 killing process with pid 71389 00:27:19.109 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71389' 00:27:19.109 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71389 00:27:19.109 [2024-11-26 17:23:49.046622] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:19.109 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71389 00:27:19.368 [2024-11-26 17:23:49.464056] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:20.746 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:27:20.746 00:27:20.746 real 0m11.556s 00:27:20.746 user 0m18.147s 00:27:20.746 sys 0m2.400s 00:27:20.746 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:20.746 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.746 ************************************ 00:27:20.746 END TEST raid_state_function_test 00:27:20.746 ************************************ 00:27:20.746 17:23:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:27:20.746 17:23:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:20.746 17:23:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:20.746 17:23:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:20.746 ************************************ 00:27:20.746 START TEST raid_state_function_test_sb 00:27:20.746 ************************************ 00:27:20.746 17:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:27:20.747 17:23:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72060 00:27:20.747 Process raid pid: 72060 00:27:20.747 17:23:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72060' 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72060 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72060 ']' 00:27:20.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:20.747 17:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:21.006 [2024-11-26 17:23:50.858417] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:27:21.006 [2024-11-26 17:23:50.858608] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.006 [2024-11-26 17:23:51.045149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.265 [2024-11-26 17:23:51.189195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.523 [2024-11-26 17:23:51.418608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:21.523 [2024-11-26 17:23:51.418658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:21.782 [2024-11-26 17:23:51.712755] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:21.782 [2024-11-26 17:23:51.712823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:21.782 [2024-11-26 17:23:51.712836] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:21.782 [2024-11-26 17:23:51.712850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:21.782 [2024-11-26 17:23:51.712866] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:27:21.782 [2024-11-26 17:23:51.712878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:21.782 [2024-11-26 17:23:51.712886] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:21.782 [2024-11-26 17:23:51.712899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:21.782 
17:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:21.782 "name": "Existed_Raid", 00:27:21.782 "uuid": "d07b43f9-9846-40d7-a555-22d7c7a97753", 00:27:21.782 "strip_size_kb": 64, 00:27:21.782 "state": "configuring", 00:27:21.782 "raid_level": "concat", 00:27:21.782 "superblock": true, 00:27:21.782 "num_base_bdevs": 4, 00:27:21.782 "num_base_bdevs_discovered": 0, 00:27:21.782 "num_base_bdevs_operational": 4, 00:27:21.782 "base_bdevs_list": [ 00:27:21.782 { 00:27:21.782 "name": "BaseBdev1", 00:27:21.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.782 "is_configured": false, 00:27:21.782 "data_offset": 0, 00:27:21.782 "data_size": 0 00:27:21.782 }, 00:27:21.782 { 00:27:21.782 "name": "BaseBdev2", 00:27:21.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.782 "is_configured": false, 00:27:21.782 "data_offset": 0, 00:27:21.782 "data_size": 0 00:27:21.782 }, 00:27:21.782 { 00:27:21.782 "name": "BaseBdev3", 00:27:21.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.782 "is_configured": false, 00:27:21.782 "data_offset": 0, 00:27:21.782 "data_size": 0 00:27:21.782 }, 00:27:21.782 { 00:27:21.782 "name": "BaseBdev4", 00:27:21.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.782 "is_configured": false, 00:27:21.782 "data_offset": 0, 00:27:21.782 "data_size": 0 00:27:21.782 } 00:27:21.782 ] 00:27:21.782 }' 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:21.782 17:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.041 17:23:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:22.041 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.041 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.041 [2024-11-26 17:23:52.144720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:22.041 [2024-11-26 17:23:52.144773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:22.041 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.041 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:22.041 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.041 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.303 [2024-11-26 17:23:52.156811] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:22.303 [2024-11-26 17:23:52.156868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:22.303 [2024-11-26 17:23:52.156881] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:22.303 [2024-11-26 17:23:52.156894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:22.303 [2024-11-26 17:23:52.156903] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:22.303 [2024-11-26 17:23:52.156916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:22.303 [2024-11-26 17:23:52.156923] bdev.c:8482:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:27:22.303 [2024-11-26 17:23:52.156936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.303 [2024-11-26 17:23:52.212385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:22.303 BaseBdev1 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.303 [ 00:27:22.303 { 00:27:22.303 "name": "BaseBdev1", 00:27:22.303 "aliases": [ 00:27:22.303 "9faaec04-c420-4244-967a-4f65de45b2b8" 00:27:22.303 ], 00:27:22.303 "product_name": "Malloc disk", 00:27:22.303 "block_size": 512, 00:27:22.303 "num_blocks": 65536, 00:27:22.303 "uuid": "9faaec04-c420-4244-967a-4f65de45b2b8", 00:27:22.303 "assigned_rate_limits": { 00:27:22.303 "rw_ios_per_sec": 0, 00:27:22.303 "rw_mbytes_per_sec": 0, 00:27:22.303 "r_mbytes_per_sec": 0, 00:27:22.303 "w_mbytes_per_sec": 0 00:27:22.303 }, 00:27:22.303 "claimed": true, 00:27:22.303 "claim_type": "exclusive_write", 00:27:22.303 "zoned": false, 00:27:22.303 "supported_io_types": { 00:27:22.303 "read": true, 00:27:22.303 "write": true, 00:27:22.303 "unmap": true, 00:27:22.303 "flush": true, 00:27:22.303 "reset": true, 00:27:22.303 "nvme_admin": false, 00:27:22.303 "nvme_io": false, 00:27:22.303 "nvme_io_md": false, 00:27:22.303 "write_zeroes": true, 00:27:22.303 "zcopy": true, 00:27:22.303 "get_zone_info": false, 00:27:22.303 "zone_management": false, 00:27:22.303 "zone_append": false, 00:27:22.303 "compare": false, 00:27:22.303 "compare_and_write": false, 00:27:22.303 "abort": true, 00:27:22.303 "seek_hole": false, 00:27:22.303 "seek_data": false, 00:27:22.303 "copy": true, 00:27:22.303 "nvme_iov_md": false 00:27:22.303 }, 00:27:22.303 "memory_domains": [ 00:27:22.303 { 00:27:22.303 "dma_device_id": "system", 00:27:22.303 "dma_device_type": 1 00:27:22.303 }, 00:27:22.303 { 00:27:22.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.303 "dma_device_type": 2 00:27:22.303 } 
00:27:22.303 ], 00:27:22.303 "driver_specific": {} 00:27:22.303 } 00:27:22.303 ] 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:22.303 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:22.304 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:22.304 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:22.304 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:22.304 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:22.304 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:22.304 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:22.304 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:22.304 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.304 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:22.304 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.304 17:23:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.304 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:22.304 "name": "Existed_Raid", 00:27:22.304 "uuid": "75dd5cb7-09d6-4bb6-9d91-8d246c01aa88", 00:27:22.304 "strip_size_kb": 64, 00:27:22.304 "state": "configuring", 00:27:22.304 "raid_level": "concat", 00:27:22.304 "superblock": true, 00:27:22.304 "num_base_bdevs": 4, 00:27:22.304 "num_base_bdevs_discovered": 1, 00:27:22.304 "num_base_bdevs_operational": 4, 00:27:22.304 "base_bdevs_list": [ 00:27:22.304 { 00:27:22.304 "name": "BaseBdev1", 00:27:22.304 "uuid": "9faaec04-c420-4244-967a-4f65de45b2b8", 00:27:22.304 "is_configured": true, 00:27:22.304 "data_offset": 2048, 00:27:22.304 "data_size": 63488 00:27:22.304 }, 00:27:22.304 { 00:27:22.304 "name": "BaseBdev2", 00:27:22.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.304 "is_configured": false, 00:27:22.304 "data_offset": 0, 00:27:22.304 "data_size": 0 00:27:22.304 }, 00:27:22.304 { 00:27:22.304 "name": "BaseBdev3", 00:27:22.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.304 "is_configured": false, 00:27:22.304 "data_offset": 0, 00:27:22.304 "data_size": 0 00:27:22.304 }, 00:27:22.304 { 00:27:22.304 "name": "BaseBdev4", 00:27:22.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.304 "is_configured": false, 00:27:22.304 "data_offset": 0, 00:27:22.304 "data_size": 0 00:27:22.304 } 00:27:22.304 ] 00:27:22.304 }' 00:27:22.304 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:22.304 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.562 17:23:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.562 [2024-11-26 17:23:52.583894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:22.562 [2024-11-26 17:23:52.583971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.562 [2024-11-26 17:23:52.595992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:22.562 [2024-11-26 17:23:52.598318] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:22.562 [2024-11-26 17:23:52.598371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:22.562 [2024-11-26 17:23:52.598383] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:22.562 [2024-11-26 17:23:52.598398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:22.562 [2024-11-26 17:23:52.598406] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:22.562 [2024-11-26 17:23:52.598419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:27:22.562 "name": "Existed_Raid", 00:27:22.562 "uuid": "38b6f6a8-e603-4e1c-96ee-5a0fb1b27f28", 00:27:22.562 "strip_size_kb": 64, 00:27:22.562 "state": "configuring", 00:27:22.562 "raid_level": "concat", 00:27:22.562 "superblock": true, 00:27:22.562 "num_base_bdevs": 4, 00:27:22.562 "num_base_bdevs_discovered": 1, 00:27:22.562 "num_base_bdevs_operational": 4, 00:27:22.562 "base_bdevs_list": [ 00:27:22.562 { 00:27:22.562 "name": "BaseBdev1", 00:27:22.562 "uuid": "9faaec04-c420-4244-967a-4f65de45b2b8", 00:27:22.562 "is_configured": true, 00:27:22.562 "data_offset": 2048, 00:27:22.562 "data_size": 63488 00:27:22.562 }, 00:27:22.562 { 00:27:22.562 "name": "BaseBdev2", 00:27:22.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.562 "is_configured": false, 00:27:22.562 "data_offset": 0, 00:27:22.562 "data_size": 0 00:27:22.562 }, 00:27:22.562 { 00:27:22.562 "name": "BaseBdev3", 00:27:22.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.562 "is_configured": false, 00:27:22.562 "data_offset": 0, 00:27:22.562 "data_size": 0 00:27:22.562 }, 00:27:22.562 { 00:27:22.562 "name": "BaseBdev4", 00:27:22.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.562 "is_configured": false, 00:27:22.562 "data_offset": 0, 00:27:22.562 "data_size": 0 00:27:22.562 } 00:27:22.562 ] 00:27:22.562 }' 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:22.562 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.130 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:23.130 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.130 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.130 [2024-11-26 17:23:53.034677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:27:23.130 BaseBdev2 00:27:23.130 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.130 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:23.130 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:23.130 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:23.130 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:23.130 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:23.130 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:23.130 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:23.130 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.130 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.130 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.130 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:23.130 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.130 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.130 [ 00:27:23.130 { 00:27:23.130 "name": "BaseBdev2", 00:27:23.130 "aliases": [ 00:27:23.130 "8ca57739-3422-4b56-8783-7ae64ba37681" 00:27:23.130 ], 00:27:23.130 "product_name": "Malloc disk", 00:27:23.130 "block_size": 512, 00:27:23.130 "num_blocks": 65536, 00:27:23.130 "uuid": "8ca57739-3422-4b56-8783-7ae64ba37681", 
00:27:23.130 "assigned_rate_limits": { 00:27:23.131 "rw_ios_per_sec": 0, 00:27:23.131 "rw_mbytes_per_sec": 0, 00:27:23.131 "r_mbytes_per_sec": 0, 00:27:23.131 "w_mbytes_per_sec": 0 00:27:23.131 }, 00:27:23.131 "claimed": true, 00:27:23.131 "claim_type": "exclusive_write", 00:27:23.131 "zoned": false, 00:27:23.131 "supported_io_types": { 00:27:23.131 "read": true, 00:27:23.131 "write": true, 00:27:23.131 "unmap": true, 00:27:23.131 "flush": true, 00:27:23.131 "reset": true, 00:27:23.131 "nvme_admin": false, 00:27:23.131 "nvme_io": false, 00:27:23.131 "nvme_io_md": false, 00:27:23.131 "write_zeroes": true, 00:27:23.131 "zcopy": true, 00:27:23.131 "get_zone_info": false, 00:27:23.131 "zone_management": false, 00:27:23.131 "zone_append": false, 00:27:23.131 "compare": false, 00:27:23.131 "compare_and_write": false, 00:27:23.131 "abort": true, 00:27:23.131 "seek_hole": false, 00:27:23.131 "seek_data": false, 00:27:23.131 "copy": true, 00:27:23.131 "nvme_iov_md": false 00:27:23.131 }, 00:27:23.131 "memory_domains": [ 00:27:23.131 { 00:27:23.131 "dma_device_id": "system", 00:27:23.131 "dma_device_type": 1 00:27:23.131 }, 00:27:23.131 { 00:27:23.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:23.131 "dma_device_type": 2 00:27:23.131 } 00:27:23.131 ], 00:27:23.131 "driver_specific": {} 00:27:23.131 } 00:27:23.131 ] 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:23.131 "name": "Existed_Raid", 00:27:23.131 "uuid": "38b6f6a8-e603-4e1c-96ee-5a0fb1b27f28", 00:27:23.131 "strip_size_kb": 64, 00:27:23.131 "state": "configuring", 00:27:23.131 "raid_level": "concat", 00:27:23.131 "superblock": true, 00:27:23.131 "num_base_bdevs": 4, 00:27:23.131 "num_base_bdevs_discovered": 2, 00:27:23.131 
"num_base_bdevs_operational": 4, 00:27:23.131 "base_bdevs_list": [ 00:27:23.131 { 00:27:23.131 "name": "BaseBdev1", 00:27:23.131 "uuid": "9faaec04-c420-4244-967a-4f65de45b2b8", 00:27:23.131 "is_configured": true, 00:27:23.131 "data_offset": 2048, 00:27:23.131 "data_size": 63488 00:27:23.131 }, 00:27:23.131 { 00:27:23.131 "name": "BaseBdev2", 00:27:23.131 "uuid": "8ca57739-3422-4b56-8783-7ae64ba37681", 00:27:23.131 "is_configured": true, 00:27:23.131 "data_offset": 2048, 00:27:23.131 "data_size": 63488 00:27:23.131 }, 00:27:23.131 { 00:27:23.131 "name": "BaseBdev3", 00:27:23.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.131 "is_configured": false, 00:27:23.131 "data_offset": 0, 00:27:23.131 "data_size": 0 00:27:23.131 }, 00:27:23.131 { 00:27:23.131 "name": "BaseBdev4", 00:27:23.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.131 "is_configured": false, 00:27:23.131 "data_offset": 0, 00:27:23.131 "data_size": 0 00:27:23.131 } 00:27:23.131 ] 00:27:23.131 }' 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:23.131 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.697 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:23.697 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.697 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.697 [2024-11-26 17:23:53.562741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:23.697 BaseBdev3 00:27:23.697 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.697 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:27:23.697 17:23:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.698 [ 00:27:23.698 { 00:27:23.698 "name": "BaseBdev3", 00:27:23.698 "aliases": [ 00:27:23.698 "2560f2a0-ed8c-4a86-94e9-5585d0f0f6b6" 00:27:23.698 ], 00:27:23.698 "product_name": "Malloc disk", 00:27:23.698 "block_size": 512, 00:27:23.698 "num_blocks": 65536, 00:27:23.698 "uuid": "2560f2a0-ed8c-4a86-94e9-5585d0f0f6b6", 00:27:23.698 "assigned_rate_limits": { 00:27:23.698 "rw_ios_per_sec": 0, 00:27:23.698 "rw_mbytes_per_sec": 0, 00:27:23.698 "r_mbytes_per_sec": 0, 00:27:23.698 "w_mbytes_per_sec": 0 00:27:23.698 }, 00:27:23.698 "claimed": true, 00:27:23.698 "claim_type": "exclusive_write", 00:27:23.698 "zoned": false, 00:27:23.698 "supported_io_types": { 
00:27:23.698 "read": true, 00:27:23.698 "write": true, 00:27:23.698 "unmap": true, 00:27:23.698 "flush": true, 00:27:23.698 "reset": true, 00:27:23.698 "nvme_admin": false, 00:27:23.698 "nvme_io": false, 00:27:23.698 "nvme_io_md": false, 00:27:23.698 "write_zeroes": true, 00:27:23.698 "zcopy": true, 00:27:23.698 "get_zone_info": false, 00:27:23.698 "zone_management": false, 00:27:23.698 "zone_append": false, 00:27:23.698 "compare": false, 00:27:23.698 "compare_and_write": false, 00:27:23.698 "abort": true, 00:27:23.698 "seek_hole": false, 00:27:23.698 "seek_data": false, 00:27:23.698 "copy": true, 00:27:23.698 "nvme_iov_md": false 00:27:23.698 }, 00:27:23.698 "memory_domains": [ 00:27:23.698 { 00:27:23.698 "dma_device_id": "system", 00:27:23.698 "dma_device_type": 1 00:27:23.698 }, 00:27:23.698 { 00:27:23.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:23.698 "dma_device_type": 2 00:27:23.698 } 00:27:23.698 ], 00:27:23.698 "driver_specific": {} 00:27:23.698 } 00:27:23.698 ] 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:23.698 "name": "Existed_Raid", 00:27:23.698 "uuid": "38b6f6a8-e603-4e1c-96ee-5a0fb1b27f28", 00:27:23.698 "strip_size_kb": 64, 00:27:23.698 "state": "configuring", 00:27:23.698 "raid_level": "concat", 00:27:23.698 "superblock": true, 00:27:23.698 "num_base_bdevs": 4, 00:27:23.698 "num_base_bdevs_discovered": 3, 00:27:23.698 "num_base_bdevs_operational": 4, 00:27:23.698 "base_bdevs_list": [ 00:27:23.698 { 00:27:23.698 "name": "BaseBdev1", 00:27:23.698 "uuid": "9faaec04-c420-4244-967a-4f65de45b2b8", 00:27:23.698 "is_configured": true, 00:27:23.698 "data_offset": 2048, 00:27:23.698 "data_size": 63488 00:27:23.698 }, 00:27:23.698 { 00:27:23.698 "name": "BaseBdev2", 00:27:23.698 
"uuid": "8ca57739-3422-4b56-8783-7ae64ba37681", 00:27:23.698 "is_configured": true, 00:27:23.698 "data_offset": 2048, 00:27:23.698 "data_size": 63488 00:27:23.698 }, 00:27:23.698 { 00:27:23.698 "name": "BaseBdev3", 00:27:23.698 "uuid": "2560f2a0-ed8c-4a86-94e9-5585d0f0f6b6", 00:27:23.698 "is_configured": true, 00:27:23.698 "data_offset": 2048, 00:27:23.698 "data_size": 63488 00:27:23.698 }, 00:27:23.698 { 00:27:23.698 "name": "BaseBdev4", 00:27:23.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.698 "is_configured": false, 00:27:23.698 "data_offset": 0, 00:27:23.698 "data_size": 0 00:27:23.698 } 00:27:23.698 ] 00:27:23.698 }' 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:23.698 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.956 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:23.956 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.956 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.216 [2024-11-26 17:23:54.076417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:24.216 [2024-11-26 17:23:54.076751] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:24.216 [2024-11-26 17:23:54.076770] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:24.216 [2024-11-26 17:23:54.077079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:24.216 [2024-11-26 17:23:54.077231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:24.216 [2024-11-26 17:23:54.077251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:27:24.216 BaseBdev4 00:27:24.216 [2024-11-26 17:23:54.077406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.216 [ 00:27:24.216 { 00:27:24.216 "name": "BaseBdev4", 00:27:24.216 "aliases": [ 00:27:24.216 "75d85656-c27d-4382-8e74-7fb2cb7ab9fb" 00:27:24.216 ], 00:27:24.216 "product_name": "Malloc disk", 00:27:24.216 "block_size": 512, 
00:27:24.216 "num_blocks": 65536, 00:27:24.216 "uuid": "75d85656-c27d-4382-8e74-7fb2cb7ab9fb", 00:27:24.216 "assigned_rate_limits": { 00:27:24.216 "rw_ios_per_sec": 0, 00:27:24.216 "rw_mbytes_per_sec": 0, 00:27:24.216 "r_mbytes_per_sec": 0, 00:27:24.216 "w_mbytes_per_sec": 0 00:27:24.216 }, 00:27:24.216 "claimed": true, 00:27:24.216 "claim_type": "exclusive_write", 00:27:24.216 "zoned": false, 00:27:24.216 "supported_io_types": { 00:27:24.216 "read": true, 00:27:24.216 "write": true, 00:27:24.216 "unmap": true, 00:27:24.216 "flush": true, 00:27:24.216 "reset": true, 00:27:24.216 "nvme_admin": false, 00:27:24.216 "nvme_io": false, 00:27:24.216 "nvme_io_md": false, 00:27:24.216 "write_zeroes": true, 00:27:24.216 "zcopy": true, 00:27:24.216 "get_zone_info": false, 00:27:24.216 "zone_management": false, 00:27:24.216 "zone_append": false, 00:27:24.216 "compare": false, 00:27:24.216 "compare_and_write": false, 00:27:24.216 "abort": true, 00:27:24.216 "seek_hole": false, 00:27:24.216 "seek_data": false, 00:27:24.216 "copy": true, 00:27:24.216 "nvme_iov_md": false 00:27:24.216 }, 00:27:24.216 "memory_domains": [ 00:27:24.216 { 00:27:24.216 "dma_device_id": "system", 00:27:24.216 "dma_device_type": 1 00:27:24.216 }, 00:27:24.216 { 00:27:24.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:24.216 "dma_device_type": 2 00:27:24.216 } 00:27:24.216 ], 00:27:24.216 "driver_specific": {} 00:27:24.216 } 00:27:24.216 ] 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:24.216 "name": "Existed_Raid", 00:27:24.216 "uuid": "38b6f6a8-e603-4e1c-96ee-5a0fb1b27f28", 00:27:24.216 "strip_size_kb": 64, 00:27:24.216 "state": "online", 00:27:24.216 "raid_level": "concat", 00:27:24.216 "superblock": true, 00:27:24.216 "num_base_bdevs": 
4, 00:27:24.216 "num_base_bdevs_discovered": 4, 00:27:24.216 "num_base_bdevs_operational": 4, 00:27:24.216 "base_bdevs_list": [ 00:27:24.216 { 00:27:24.216 "name": "BaseBdev1", 00:27:24.216 "uuid": "9faaec04-c420-4244-967a-4f65de45b2b8", 00:27:24.216 "is_configured": true, 00:27:24.216 "data_offset": 2048, 00:27:24.216 "data_size": 63488 00:27:24.216 }, 00:27:24.216 { 00:27:24.216 "name": "BaseBdev2", 00:27:24.216 "uuid": "8ca57739-3422-4b56-8783-7ae64ba37681", 00:27:24.216 "is_configured": true, 00:27:24.216 "data_offset": 2048, 00:27:24.216 "data_size": 63488 00:27:24.216 }, 00:27:24.216 { 00:27:24.216 "name": "BaseBdev3", 00:27:24.216 "uuid": "2560f2a0-ed8c-4a86-94e9-5585d0f0f6b6", 00:27:24.216 "is_configured": true, 00:27:24.216 "data_offset": 2048, 00:27:24.216 "data_size": 63488 00:27:24.216 }, 00:27:24.216 { 00:27:24.216 "name": "BaseBdev4", 00:27:24.216 "uuid": "75d85656-c27d-4382-8e74-7fb2cb7ab9fb", 00:27:24.216 "is_configured": true, 00:27:24.216 "data_offset": 2048, 00:27:24.216 "data_size": 63488 00:27:24.216 } 00:27:24.216 ] 00:27:24.216 }' 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:24.216 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.476 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:24.476 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:24.476 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:24.476 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:24.476 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:27:24.476 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:24.476 
17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:24.476 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:24.476 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.476 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.476 [2024-11-26 17:23:54.564172] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:24.735 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.735 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:24.735 "name": "Existed_Raid", 00:27:24.735 "aliases": [ 00:27:24.735 "38b6f6a8-e603-4e1c-96ee-5a0fb1b27f28" 00:27:24.735 ], 00:27:24.735 "product_name": "Raid Volume", 00:27:24.735 "block_size": 512, 00:27:24.735 "num_blocks": 253952, 00:27:24.735 "uuid": "38b6f6a8-e603-4e1c-96ee-5a0fb1b27f28", 00:27:24.735 "assigned_rate_limits": { 00:27:24.735 "rw_ios_per_sec": 0, 00:27:24.735 "rw_mbytes_per_sec": 0, 00:27:24.735 "r_mbytes_per_sec": 0, 00:27:24.735 "w_mbytes_per_sec": 0 00:27:24.735 }, 00:27:24.735 "claimed": false, 00:27:24.735 "zoned": false, 00:27:24.735 "supported_io_types": { 00:27:24.735 "read": true, 00:27:24.735 "write": true, 00:27:24.735 "unmap": true, 00:27:24.735 "flush": true, 00:27:24.735 "reset": true, 00:27:24.735 "nvme_admin": false, 00:27:24.735 "nvme_io": false, 00:27:24.735 "nvme_io_md": false, 00:27:24.735 "write_zeroes": true, 00:27:24.735 "zcopy": false, 00:27:24.735 "get_zone_info": false, 00:27:24.735 "zone_management": false, 00:27:24.735 "zone_append": false, 00:27:24.735 "compare": false, 00:27:24.735 "compare_and_write": false, 00:27:24.735 "abort": false, 00:27:24.735 "seek_hole": false, 00:27:24.735 "seek_data": false, 00:27:24.735 "copy": false, 00:27:24.735 
"nvme_iov_md": false 00:27:24.735 }, 00:27:24.735 "memory_domains": [ 00:27:24.735 { 00:27:24.735 "dma_device_id": "system", 00:27:24.735 "dma_device_type": 1 00:27:24.735 }, 00:27:24.735 { 00:27:24.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:24.735 "dma_device_type": 2 00:27:24.735 }, 00:27:24.735 { 00:27:24.735 "dma_device_id": "system", 00:27:24.735 "dma_device_type": 1 00:27:24.735 }, 00:27:24.735 { 00:27:24.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:24.735 "dma_device_type": 2 00:27:24.735 }, 00:27:24.735 { 00:27:24.735 "dma_device_id": "system", 00:27:24.735 "dma_device_type": 1 00:27:24.735 }, 00:27:24.735 { 00:27:24.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:24.735 "dma_device_type": 2 00:27:24.735 }, 00:27:24.735 { 00:27:24.735 "dma_device_id": "system", 00:27:24.735 "dma_device_type": 1 00:27:24.735 }, 00:27:24.735 { 00:27:24.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:24.735 "dma_device_type": 2 00:27:24.735 } 00:27:24.735 ], 00:27:24.735 "driver_specific": { 00:27:24.735 "raid": { 00:27:24.735 "uuid": "38b6f6a8-e603-4e1c-96ee-5a0fb1b27f28", 00:27:24.735 "strip_size_kb": 64, 00:27:24.736 "state": "online", 00:27:24.736 "raid_level": "concat", 00:27:24.736 "superblock": true, 00:27:24.736 "num_base_bdevs": 4, 00:27:24.736 "num_base_bdevs_discovered": 4, 00:27:24.736 "num_base_bdevs_operational": 4, 00:27:24.736 "base_bdevs_list": [ 00:27:24.736 { 00:27:24.736 "name": "BaseBdev1", 00:27:24.736 "uuid": "9faaec04-c420-4244-967a-4f65de45b2b8", 00:27:24.736 "is_configured": true, 00:27:24.736 "data_offset": 2048, 00:27:24.736 "data_size": 63488 00:27:24.736 }, 00:27:24.736 { 00:27:24.736 "name": "BaseBdev2", 00:27:24.736 "uuid": "8ca57739-3422-4b56-8783-7ae64ba37681", 00:27:24.736 "is_configured": true, 00:27:24.736 "data_offset": 2048, 00:27:24.736 "data_size": 63488 00:27:24.736 }, 00:27:24.736 { 00:27:24.736 "name": "BaseBdev3", 00:27:24.736 "uuid": "2560f2a0-ed8c-4a86-94e9-5585d0f0f6b6", 00:27:24.736 "is_configured": true, 
00:27:24.736 "data_offset": 2048, 00:27:24.736 "data_size": 63488 00:27:24.736 }, 00:27:24.736 { 00:27:24.736 "name": "BaseBdev4", 00:27:24.736 "uuid": "75d85656-c27d-4382-8e74-7fb2cb7ab9fb", 00:27:24.736 "is_configured": true, 00:27:24.736 "data_offset": 2048, 00:27:24.736 "data_size": 63488 00:27:24.736 } 00:27:24.736 ] 00:27:24.736 } 00:27:24.736 } 00:27:24.736 }' 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:24.736 BaseBdev2 00:27:24.736 BaseBdev3 00:27:24.736 BaseBdev4' 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:24.736 17:23:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.736 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.995 [2024-11-26 17:23:54.879788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:24.995 [2024-11-26 17:23:54.879832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:24.995 [2024-11-26 17:23:54.879895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.995 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:24.995 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:24.995 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:24.995 "name": "Existed_Raid", 00:27:24.995 "uuid": "38b6f6a8-e603-4e1c-96ee-5a0fb1b27f28", 00:27:24.995 "strip_size_kb": 64, 00:27:24.995 "state": "offline", 00:27:24.995 "raid_level": "concat", 00:27:24.995 "superblock": true, 00:27:24.995 "num_base_bdevs": 4, 00:27:24.995 "num_base_bdevs_discovered": 3, 00:27:24.995 "num_base_bdevs_operational": 3, 00:27:24.995 "base_bdevs_list": [ 00:27:24.995 { 00:27:24.995 "name": null, 00:27:24.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.995 "is_configured": false, 00:27:24.995 "data_offset": 0, 00:27:24.995 "data_size": 63488 00:27:24.995 }, 00:27:24.995 { 00:27:24.995 "name": "BaseBdev2", 00:27:24.995 "uuid": "8ca57739-3422-4b56-8783-7ae64ba37681", 00:27:24.995 "is_configured": true, 00:27:24.995 "data_offset": 2048, 00:27:24.995 "data_size": 63488 00:27:24.995 }, 00:27:24.995 { 00:27:24.995 "name": "BaseBdev3", 00:27:24.995 "uuid": "2560f2a0-ed8c-4a86-94e9-5585d0f0f6b6", 00:27:24.995 "is_configured": true, 00:27:24.995 "data_offset": 2048, 00:27:24.995 "data_size": 63488 00:27:24.995 }, 00:27:24.995 { 00:27:24.995 "name": "BaseBdev4", 00:27:24.995 "uuid": "75d85656-c27d-4382-8e74-7fb2cb7ab9fb", 00:27:24.995 "is_configured": true, 00:27:24.995 "data_offset": 2048, 00:27:24.995 "data_size": 63488 00:27:24.995 } 00:27:24.995 ] 00:27:24.995 }' 00:27:24.995 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:24.995 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:25.564 17:23:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.564 [2024-11-26 17:23:55.450752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.564 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.564 [2024-11-26 17:23:55.602538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:27:25.824 17:23:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.824 [2024-11-26 17:23:55.756755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:25.824 [2024-11-26 17:23:55.756822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.824 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.083 BaseBdev2 00:27:26.083 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.083 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:27:26.083 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:26.083 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:26.083 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:26.083 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:26.083 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:26.083 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:26.083 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.083 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.083 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.083 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:26.083 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.083 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.083 [ 00:27:26.083 { 00:27:26.083 "name": "BaseBdev2", 00:27:26.083 "aliases": [ 00:27:26.083 
"07d84cf1-4bdd-4340-a4fa-7ef81d43d0bf" 00:27:26.083 ], 00:27:26.083 "product_name": "Malloc disk", 00:27:26.083 "block_size": 512, 00:27:26.083 "num_blocks": 65536, 00:27:26.083 "uuid": "07d84cf1-4bdd-4340-a4fa-7ef81d43d0bf", 00:27:26.083 "assigned_rate_limits": { 00:27:26.083 "rw_ios_per_sec": 0, 00:27:26.084 "rw_mbytes_per_sec": 0, 00:27:26.084 "r_mbytes_per_sec": 0, 00:27:26.084 "w_mbytes_per_sec": 0 00:27:26.084 }, 00:27:26.084 "claimed": false, 00:27:26.084 "zoned": false, 00:27:26.084 "supported_io_types": { 00:27:26.084 "read": true, 00:27:26.084 "write": true, 00:27:26.084 "unmap": true, 00:27:26.084 "flush": true, 00:27:26.084 "reset": true, 00:27:26.084 "nvme_admin": false, 00:27:26.084 "nvme_io": false, 00:27:26.084 "nvme_io_md": false, 00:27:26.084 "write_zeroes": true, 00:27:26.084 "zcopy": true, 00:27:26.084 "get_zone_info": false, 00:27:26.084 "zone_management": false, 00:27:26.084 "zone_append": false, 00:27:26.084 "compare": false, 00:27:26.084 "compare_and_write": false, 00:27:26.084 "abort": true, 00:27:26.084 "seek_hole": false, 00:27:26.084 "seek_data": false, 00:27:26.084 "copy": true, 00:27:26.084 "nvme_iov_md": false 00:27:26.084 }, 00:27:26.084 "memory_domains": [ 00:27:26.084 { 00:27:26.084 "dma_device_id": "system", 00:27:26.084 "dma_device_type": 1 00:27:26.084 }, 00:27:26.084 { 00:27:26.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:26.084 "dma_device_type": 2 00:27:26.084 } 00:27:26.084 ], 00:27:26.084 "driver_specific": {} 00:27:26.084 } 00:27:26.084 ] 00:27:26.084 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.084 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:26.084 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:26.084 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:26.084 17:23:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:26.084 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.084 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.084 BaseBdev3 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.084 [ 00:27:26.084 { 
00:27:26.084 "name": "BaseBdev3", 00:27:26.084 "aliases": [ 00:27:26.084 "3b1e23a2-63cb-49f4-abc4-bf8e835852f9" 00:27:26.084 ], 00:27:26.084 "product_name": "Malloc disk", 00:27:26.084 "block_size": 512, 00:27:26.084 "num_blocks": 65536, 00:27:26.084 "uuid": "3b1e23a2-63cb-49f4-abc4-bf8e835852f9", 00:27:26.084 "assigned_rate_limits": { 00:27:26.084 "rw_ios_per_sec": 0, 00:27:26.084 "rw_mbytes_per_sec": 0, 00:27:26.084 "r_mbytes_per_sec": 0, 00:27:26.084 "w_mbytes_per_sec": 0 00:27:26.084 }, 00:27:26.084 "claimed": false, 00:27:26.084 "zoned": false, 00:27:26.084 "supported_io_types": { 00:27:26.084 "read": true, 00:27:26.084 "write": true, 00:27:26.084 "unmap": true, 00:27:26.084 "flush": true, 00:27:26.084 "reset": true, 00:27:26.084 "nvme_admin": false, 00:27:26.084 "nvme_io": false, 00:27:26.084 "nvme_io_md": false, 00:27:26.084 "write_zeroes": true, 00:27:26.084 "zcopy": true, 00:27:26.084 "get_zone_info": false, 00:27:26.084 "zone_management": false, 00:27:26.084 "zone_append": false, 00:27:26.084 "compare": false, 00:27:26.084 "compare_and_write": false, 00:27:26.084 "abort": true, 00:27:26.084 "seek_hole": false, 00:27:26.084 "seek_data": false, 00:27:26.084 "copy": true, 00:27:26.084 "nvme_iov_md": false 00:27:26.084 }, 00:27:26.084 "memory_domains": [ 00:27:26.084 { 00:27:26.084 "dma_device_id": "system", 00:27:26.084 "dma_device_type": 1 00:27:26.084 }, 00:27:26.084 { 00:27:26.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:26.084 "dma_device_type": 2 00:27:26.084 } 00:27:26.084 ], 00:27:26.084 "driver_specific": {} 00:27:26.084 } 00:27:26.084 ] 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.084 BaseBdev4 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:27:26.084 [ 00:27:26.084 { 00:27:26.084 "name": "BaseBdev4", 00:27:26.084 "aliases": [ 00:27:26.084 "2dc45260-8673-4e7b-86cc-0749be02d347" 00:27:26.084 ], 00:27:26.084 "product_name": "Malloc disk", 00:27:26.084 "block_size": 512, 00:27:26.084 "num_blocks": 65536, 00:27:26.084 "uuid": "2dc45260-8673-4e7b-86cc-0749be02d347", 00:27:26.084 "assigned_rate_limits": { 00:27:26.084 "rw_ios_per_sec": 0, 00:27:26.084 "rw_mbytes_per_sec": 0, 00:27:26.084 "r_mbytes_per_sec": 0, 00:27:26.084 "w_mbytes_per_sec": 0 00:27:26.084 }, 00:27:26.084 "claimed": false, 00:27:26.084 "zoned": false, 00:27:26.084 "supported_io_types": { 00:27:26.084 "read": true, 00:27:26.084 "write": true, 00:27:26.084 "unmap": true, 00:27:26.084 "flush": true, 00:27:26.084 "reset": true, 00:27:26.084 "nvme_admin": false, 00:27:26.084 "nvme_io": false, 00:27:26.084 "nvme_io_md": false, 00:27:26.084 "write_zeroes": true, 00:27:26.084 "zcopy": true, 00:27:26.084 "get_zone_info": false, 00:27:26.084 "zone_management": false, 00:27:26.084 "zone_append": false, 00:27:26.084 "compare": false, 00:27:26.084 "compare_and_write": false, 00:27:26.084 "abort": true, 00:27:26.084 "seek_hole": false, 00:27:26.084 "seek_data": false, 00:27:26.084 "copy": true, 00:27:26.084 "nvme_iov_md": false 00:27:26.084 }, 00:27:26.084 "memory_domains": [ 00:27:26.084 { 00:27:26.084 "dma_device_id": "system", 00:27:26.084 "dma_device_type": 1 00:27:26.084 }, 00:27:26.084 { 00:27:26.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:26.084 "dma_device_type": 2 00:27:26.084 } 00:27:26.084 ], 00:27:26.084 "driver_specific": {} 00:27:26.084 } 00:27:26.084 ] 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:26.084 17:23:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.084 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.084 [2024-11-26 17:23:56.114188] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:26.084 [2024-11-26 17:23:56.114242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:26.084 [2024-11-26 17:23:56.114272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:26.084 [2024-11-26 17:23:56.116660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:26.085 [2024-11-26 17:23:56.116722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:26.085 "name": "Existed_Raid", 00:27:26.085 "uuid": "58debaa3-e3ef-45b5-8ff9-978a1bc14f94", 00:27:26.085 "strip_size_kb": 64, 00:27:26.085 "state": "configuring", 00:27:26.085 "raid_level": "concat", 00:27:26.085 "superblock": true, 00:27:26.085 "num_base_bdevs": 4, 00:27:26.085 "num_base_bdevs_discovered": 3, 00:27:26.085 "num_base_bdevs_operational": 4, 00:27:26.085 "base_bdevs_list": [ 00:27:26.085 { 00:27:26.085 "name": "BaseBdev1", 00:27:26.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.085 "is_configured": false, 00:27:26.085 "data_offset": 0, 00:27:26.085 "data_size": 0 00:27:26.085 }, 00:27:26.085 { 00:27:26.085 "name": "BaseBdev2", 00:27:26.085 "uuid": "07d84cf1-4bdd-4340-a4fa-7ef81d43d0bf", 00:27:26.085 "is_configured": true, 00:27:26.085 "data_offset": 2048, 00:27:26.085 "data_size": 63488 
00:27:26.085 }, 00:27:26.085 { 00:27:26.085 "name": "BaseBdev3", 00:27:26.085 "uuid": "3b1e23a2-63cb-49f4-abc4-bf8e835852f9", 00:27:26.085 "is_configured": true, 00:27:26.085 "data_offset": 2048, 00:27:26.085 "data_size": 63488 00:27:26.085 }, 00:27:26.085 { 00:27:26.085 "name": "BaseBdev4", 00:27:26.085 "uuid": "2dc45260-8673-4e7b-86cc-0749be02d347", 00:27:26.085 "is_configured": true, 00:27:26.085 "data_offset": 2048, 00:27:26.085 "data_size": 63488 00:27:26.085 } 00:27:26.085 ] 00:27:26.085 }' 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:26.085 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.656 [2024-11-26 17:23:56.545714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.656 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.657 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:26.657 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.657 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:26.657 "name": "Existed_Raid", 00:27:26.657 "uuid": "58debaa3-e3ef-45b5-8ff9-978a1bc14f94", 00:27:26.657 "strip_size_kb": 64, 00:27:26.657 "state": "configuring", 00:27:26.657 "raid_level": "concat", 00:27:26.657 "superblock": true, 00:27:26.657 "num_base_bdevs": 4, 00:27:26.657 "num_base_bdevs_discovered": 2, 00:27:26.657 "num_base_bdevs_operational": 4, 00:27:26.657 "base_bdevs_list": [ 00:27:26.657 { 00:27:26.657 "name": "BaseBdev1", 00:27:26.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.657 "is_configured": false, 00:27:26.657 "data_offset": 0, 00:27:26.657 "data_size": 0 00:27:26.657 }, 00:27:26.657 { 00:27:26.657 "name": null, 00:27:26.657 "uuid": "07d84cf1-4bdd-4340-a4fa-7ef81d43d0bf", 00:27:26.657 "is_configured": false, 00:27:26.657 "data_offset": 0, 00:27:26.657 "data_size": 63488 
00:27:26.657 }, 00:27:26.657 { 00:27:26.657 "name": "BaseBdev3", 00:27:26.657 "uuid": "3b1e23a2-63cb-49f4-abc4-bf8e835852f9", 00:27:26.657 "is_configured": true, 00:27:26.657 "data_offset": 2048, 00:27:26.657 "data_size": 63488 00:27:26.657 }, 00:27:26.657 { 00:27:26.657 "name": "BaseBdev4", 00:27:26.657 "uuid": "2dc45260-8673-4e7b-86cc-0749be02d347", 00:27:26.657 "is_configured": true, 00:27:26.657 "data_offset": 2048, 00:27:26.657 "data_size": 63488 00:27:26.657 } 00:27:26.657 ] 00:27:26.657 }' 00:27:26.657 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:26.657 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.915 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:26.915 17:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.915 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.915 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.915 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.175 [2024-11-26 17:23:57.083883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:27.175 BaseBdev1 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.175 [ 00:27:27.175 { 00:27:27.175 "name": "BaseBdev1", 00:27:27.175 "aliases": [ 00:27:27.175 "feba913c-d27b-4bb6-9982-d83924c3d773" 00:27:27.175 ], 00:27:27.175 "product_name": "Malloc disk", 00:27:27.175 "block_size": 512, 00:27:27.175 "num_blocks": 65536, 00:27:27.175 "uuid": "feba913c-d27b-4bb6-9982-d83924c3d773", 00:27:27.175 "assigned_rate_limits": { 00:27:27.175 "rw_ios_per_sec": 0, 00:27:27.175 "rw_mbytes_per_sec": 0, 
00:27:27.175 "r_mbytes_per_sec": 0, 00:27:27.175 "w_mbytes_per_sec": 0 00:27:27.175 }, 00:27:27.175 "claimed": true, 00:27:27.175 "claim_type": "exclusive_write", 00:27:27.175 "zoned": false, 00:27:27.175 "supported_io_types": { 00:27:27.175 "read": true, 00:27:27.175 "write": true, 00:27:27.175 "unmap": true, 00:27:27.175 "flush": true, 00:27:27.175 "reset": true, 00:27:27.175 "nvme_admin": false, 00:27:27.175 "nvme_io": false, 00:27:27.175 "nvme_io_md": false, 00:27:27.175 "write_zeroes": true, 00:27:27.175 "zcopy": true, 00:27:27.175 "get_zone_info": false, 00:27:27.175 "zone_management": false, 00:27:27.175 "zone_append": false, 00:27:27.175 "compare": false, 00:27:27.175 "compare_and_write": false, 00:27:27.175 "abort": true, 00:27:27.175 "seek_hole": false, 00:27:27.175 "seek_data": false, 00:27:27.175 "copy": true, 00:27:27.175 "nvme_iov_md": false 00:27:27.175 }, 00:27:27.175 "memory_domains": [ 00:27:27.175 { 00:27:27.175 "dma_device_id": "system", 00:27:27.175 "dma_device_type": 1 00:27:27.175 }, 00:27:27.175 { 00:27:27.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:27.175 "dma_device_type": 2 00:27:27.175 } 00:27:27.175 ], 00:27:27.175 "driver_specific": {} 00:27:27.175 } 00:27:27.175 ] 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:27.175 17:23:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.175 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:27.175 "name": "Existed_Raid", 00:27:27.175 "uuid": "58debaa3-e3ef-45b5-8ff9-978a1bc14f94", 00:27:27.175 "strip_size_kb": 64, 00:27:27.175 "state": "configuring", 00:27:27.175 "raid_level": "concat", 00:27:27.175 "superblock": true, 00:27:27.175 "num_base_bdevs": 4, 00:27:27.175 "num_base_bdevs_discovered": 3, 00:27:27.175 "num_base_bdevs_operational": 4, 00:27:27.175 "base_bdevs_list": [ 00:27:27.175 { 00:27:27.175 "name": "BaseBdev1", 00:27:27.175 "uuid": "feba913c-d27b-4bb6-9982-d83924c3d773", 00:27:27.175 "is_configured": true, 00:27:27.175 "data_offset": 2048, 00:27:27.175 "data_size": 63488 00:27:27.175 }, 00:27:27.175 { 
00:27:27.175 "name": null, 00:27:27.175 "uuid": "07d84cf1-4bdd-4340-a4fa-7ef81d43d0bf", 00:27:27.175 "is_configured": false, 00:27:27.175 "data_offset": 0, 00:27:27.175 "data_size": 63488 00:27:27.175 }, 00:27:27.175 { 00:27:27.176 "name": "BaseBdev3", 00:27:27.176 "uuid": "3b1e23a2-63cb-49f4-abc4-bf8e835852f9", 00:27:27.176 "is_configured": true, 00:27:27.176 "data_offset": 2048, 00:27:27.176 "data_size": 63488 00:27:27.176 }, 00:27:27.176 { 00:27:27.176 "name": "BaseBdev4", 00:27:27.176 "uuid": "2dc45260-8673-4e7b-86cc-0749be02d347", 00:27:27.176 "is_configured": true, 00:27:27.176 "data_offset": 2048, 00:27:27.176 "data_size": 63488 00:27:27.176 } 00:27:27.176 ] 00:27:27.176 }' 00:27:27.176 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:27.176 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.434 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.434 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:27.434 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.434 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.693 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.693 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:27:27.693 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:27:27.693 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.693 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.694 [2024-11-26 17:23:57.583331] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.694 17:23:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:27.694 "name": "Existed_Raid", 00:27:27.694 "uuid": "58debaa3-e3ef-45b5-8ff9-978a1bc14f94", 00:27:27.694 "strip_size_kb": 64, 00:27:27.694 "state": "configuring", 00:27:27.694 "raid_level": "concat", 00:27:27.694 "superblock": true, 00:27:27.694 "num_base_bdevs": 4, 00:27:27.694 "num_base_bdevs_discovered": 2, 00:27:27.694 "num_base_bdevs_operational": 4, 00:27:27.694 "base_bdevs_list": [ 00:27:27.694 { 00:27:27.694 "name": "BaseBdev1", 00:27:27.694 "uuid": "feba913c-d27b-4bb6-9982-d83924c3d773", 00:27:27.694 "is_configured": true, 00:27:27.694 "data_offset": 2048, 00:27:27.694 "data_size": 63488 00:27:27.694 }, 00:27:27.694 { 00:27:27.694 "name": null, 00:27:27.694 "uuid": "07d84cf1-4bdd-4340-a4fa-7ef81d43d0bf", 00:27:27.694 "is_configured": false, 00:27:27.694 "data_offset": 0, 00:27:27.694 "data_size": 63488 00:27:27.694 }, 00:27:27.694 { 00:27:27.694 "name": null, 00:27:27.694 "uuid": "3b1e23a2-63cb-49f4-abc4-bf8e835852f9", 00:27:27.694 "is_configured": false, 00:27:27.694 "data_offset": 0, 00:27:27.694 "data_size": 63488 00:27:27.694 }, 00:27:27.694 { 00:27:27.694 "name": "BaseBdev4", 00:27:27.694 "uuid": "2dc45260-8673-4e7b-86cc-0749be02d347", 00:27:27.694 "is_configured": true, 00:27:27.694 "data_offset": 2048, 00:27:27.694 "data_size": 63488 00:27:27.694 } 00:27:27.694 ] 00:27:27.694 }' 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:27.694 17:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.952 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.952 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.952 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.952 17:23:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.211 [2024-11-26 17:23:58.110624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:28.211 "name": "Existed_Raid", 00:27:28.211 "uuid": "58debaa3-e3ef-45b5-8ff9-978a1bc14f94", 00:27:28.211 "strip_size_kb": 64, 00:27:28.211 "state": "configuring", 00:27:28.211 "raid_level": "concat", 00:27:28.211 "superblock": true, 00:27:28.211 "num_base_bdevs": 4, 00:27:28.211 "num_base_bdevs_discovered": 3, 00:27:28.211 "num_base_bdevs_operational": 4, 00:27:28.211 "base_bdevs_list": [ 00:27:28.211 { 00:27:28.211 "name": "BaseBdev1", 00:27:28.211 "uuid": "feba913c-d27b-4bb6-9982-d83924c3d773", 00:27:28.211 "is_configured": true, 00:27:28.211 "data_offset": 2048, 00:27:28.211 "data_size": 63488 00:27:28.211 }, 00:27:28.211 { 00:27:28.211 "name": null, 00:27:28.211 "uuid": "07d84cf1-4bdd-4340-a4fa-7ef81d43d0bf", 00:27:28.211 "is_configured": false, 00:27:28.211 "data_offset": 0, 00:27:28.211 "data_size": 63488 00:27:28.211 }, 00:27:28.211 { 00:27:28.211 "name": "BaseBdev3", 00:27:28.211 "uuid": "3b1e23a2-63cb-49f4-abc4-bf8e835852f9", 00:27:28.211 "is_configured": true, 00:27:28.211 "data_offset": 2048, 00:27:28.211 "data_size": 63488 00:27:28.211 }, 00:27:28.211 { 00:27:28.211 "name": "BaseBdev4", 00:27:28.211 "uuid": 
"2dc45260-8673-4e7b-86cc-0749be02d347", 00:27:28.211 "is_configured": true, 00:27:28.211 "data_offset": 2048, 00:27:28.211 "data_size": 63488 00:27:28.211 } 00:27:28.211 ] 00:27:28.211 }' 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:28.211 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.471 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:28.471 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.471 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.471 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.471 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.471 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:27:28.471 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:28.471 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.471 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.471 [2024-11-26 17:23:58.554009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:28.730 "name": "Existed_Raid", 00:27:28.730 "uuid": "58debaa3-e3ef-45b5-8ff9-978a1bc14f94", 00:27:28.730 "strip_size_kb": 64, 00:27:28.730 "state": "configuring", 00:27:28.730 "raid_level": "concat", 00:27:28.730 "superblock": true, 00:27:28.730 "num_base_bdevs": 4, 00:27:28.730 "num_base_bdevs_discovered": 2, 00:27:28.730 "num_base_bdevs_operational": 4, 00:27:28.730 "base_bdevs_list": [ 00:27:28.730 { 00:27:28.730 "name": null, 00:27:28.730 
"uuid": "feba913c-d27b-4bb6-9982-d83924c3d773", 00:27:28.730 "is_configured": false, 00:27:28.730 "data_offset": 0, 00:27:28.730 "data_size": 63488 00:27:28.730 }, 00:27:28.730 { 00:27:28.730 "name": null, 00:27:28.730 "uuid": "07d84cf1-4bdd-4340-a4fa-7ef81d43d0bf", 00:27:28.730 "is_configured": false, 00:27:28.730 "data_offset": 0, 00:27:28.730 "data_size": 63488 00:27:28.730 }, 00:27:28.730 { 00:27:28.730 "name": "BaseBdev3", 00:27:28.730 "uuid": "3b1e23a2-63cb-49f4-abc4-bf8e835852f9", 00:27:28.730 "is_configured": true, 00:27:28.730 "data_offset": 2048, 00:27:28.730 "data_size": 63488 00:27:28.730 }, 00:27:28.730 { 00:27:28.730 "name": "BaseBdev4", 00:27:28.730 "uuid": "2dc45260-8673-4e7b-86cc-0749be02d347", 00:27:28.730 "is_configured": true, 00:27:28.730 "data_offset": 2048, 00:27:28.730 "data_size": 63488 00:27:28.730 } 00:27:28.730 ] 00:27:28.730 }' 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:28.730 17:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:29.308 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.308 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:29.308 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.308 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:29.308 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.308 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:27:29.308 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:29.308 17:23:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.308 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:29.309 [2024-11-26 17:23:59.201729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:29.309 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.309 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:29.309 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:29.309 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:29.309 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:29.309 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:29.309 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:29.309 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:29.309 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:29.309 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:29.309 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:29.309 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:29.309 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.309 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.310 17:23:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:29.310 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.310 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:29.310 "name": "Existed_Raid", 00:27:29.310 "uuid": "58debaa3-e3ef-45b5-8ff9-978a1bc14f94", 00:27:29.310 "strip_size_kb": 64, 00:27:29.310 "state": "configuring", 00:27:29.310 "raid_level": "concat", 00:27:29.310 "superblock": true, 00:27:29.310 "num_base_bdevs": 4, 00:27:29.310 "num_base_bdevs_discovered": 3, 00:27:29.310 "num_base_bdevs_operational": 4, 00:27:29.310 "base_bdevs_list": [ 00:27:29.310 { 00:27:29.310 "name": null, 00:27:29.310 "uuid": "feba913c-d27b-4bb6-9982-d83924c3d773", 00:27:29.310 "is_configured": false, 00:27:29.310 "data_offset": 0, 00:27:29.310 "data_size": 63488 00:27:29.310 }, 00:27:29.310 { 00:27:29.310 "name": "BaseBdev2", 00:27:29.310 "uuid": "07d84cf1-4bdd-4340-a4fa-7ef81d43d0bf", 00:27:29.310 "is_configured": true, 00:27:29.310 "data_offset": 2048, 00:27:29.310 "data_size": 63488 00:27:29.310 }, 00:27:29.310 { 00:27:29.310 "name": "BaseBdev3", 00:27:29.310 "uuid": "3b1e23a2-63cb-49f4-abc4-bf8e835852f9", 00:27:29.310 "is_configured": true, 00:27:29.310 "data_offset": 2048, 00:27:29.310 "data_size": 63488 00:27:29.310 }, 00:27:29.310 { 00:27:29.310 "name": "BaseBdev4", 00:27:29.310 "uuid": "2dc45260-8673-4e7b-86cc-0749be02d347", 00:27:29.310 "is_configured": true, 00:27:29.310 "data_offset": 2048, 00:27:29.310 "data_size": 63488 00:27:29.310 } 00:27:29.310 ] 00:27:29.310 }' 00:27:29.310 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:29.310 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.878 17:23:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u feba913c-d27b-4bb6-9982-d83924c3d773 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:29.878 [2024-11-26 17:23:59.851066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:29.878 [2024-11-26 17:23:59.851318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:29.878 [2024-11-26 17:23:59.851333] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:29.878 [2024-11-26 17:23:59.851668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:27:29.878 [2024-11-26 17:23:59.851821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:29.878 [2024-11-26 17:23:59.851840] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:27:29.878 NewBaseBdev 00:27:29.878 [2024-11-26 17:23:59.852013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.878 17:23:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:29.878 [ 00:27:29.878 { 00:27:29.878 "name": "NewBaseBdev", 00:27:29.878 "aliases": [ 00:27:29.878 "feba913c-d27b-4bb6-9982-d83924c3d773" 00:27:29.878 ], 00:27:29.878 "product_name": "Malloc disk", 00:27:29.878 "block_size": 512, 00:27:29.878 "num_blocks": 65536, 00:27:29.878 "uuid": "feba913c-d27b-4bb6-9982-d83924c3d773", 00:27:29.878 "assigned_rate_limits": { 00:27:29.878 "rw_ios_per_sec": 0, 00:27:29.878 "rw_mbytes_per_sec": 0, 00:27:29.878 "r_mbytes_per_sec": 0, 00:27:29.878 "w_mbytes_per_sec": 0 00:27:29.878 }, 00:27:29.878 "claimed": true, 00:27:29.878 "claim_type": "exclusive_write", 00:27:29.878 "zoned": false, 00:27:29.878 "supported_io_types": { 00:27:29.878 "read": true, 00:27:29.878 "write": true, 00:27:29.878 "unmap": true, 00:27:29.878 "flush": true, 00:27:29.878 "reset": true, 00:27:29.878 "nvme_admin": false, 00:27:29.878 "nvme_io": false, 00:27:29.878 "nvme_io_md": false, 00:27:29.878 "write_zeroes": true, 00:27:29.878 "zcopy": true, 00:27:29.878 "get_zone_info": false, 00:27:29.878 "zone_management": false, 00:27:29.878 "zone_append": false, 00:27:29.878 "compare": false, 00:27:29.878 "compare_and_write": false, 00:27:29.878 "abort": true, 00:27:29.878 "seek_hole": false, 00:27:29.878 "seek_data": false, 00:27:29.878 "copy": true, 00:27:29.878 "nvme_iov_md": false 00:27:29.878 }, 00:27:29.878 "memory_domains": [ 00:27:29.878 { 00:27:29.878 "dma_device_id": "system", 00:27:29.878 "dma_device_type": 1 00:27:29.878 }, 00:27:29.878 { 00:27:29.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:29.878 "dma_device_type": 2 00:27:29.878 } 00:27:29.878 ], 00:27:29.878 "driver_specific": {} 00:27:29.878 } 00:27:29.878 ] 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:29.878 17:23:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.878 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:29.878 "name": "Existed_Raid", 00:27:29.878 "uuid": "58debaa3-e3ef-45b5-8ff9-978a1bc14f94", 00:27:29.878 "strip_size_kb": 64, 00:27:29.878 
"state": "online", 00:27:29.878 "raid_level": "concat", 00:27:29.878 "superblock": true, 00:27:29.878 "num_base_bdevs": 4, 00:27:29.878 "num_base_bdevs_discovered": 4, 00:27:29.878 "num_base_bdevs_operational": 4, 00:27:29.878 "base_bdevs_list": [ 00:27:29.878 { 00:27:29.878 "name": "NewBaseBdev", 00:27:29.878 "uuid": "feba913c-d27b-4bb6-9982-d83924c3d773", 00:27:29.878 "is_configured": true, 00:27:29.878 "data_offset": 2048, 00:27:29.878 "data_size": 63488 00:27:29.878 }, 00:27:29.878 { 00:27:29.878 "name": "BaseBdev2", 00:27:29.878 "uuid": "07d84cf1-4bdd-4340-a4fa-7ef81d43d0bf", 00:27:29.878 "is_configured": true, 00:27:29.878 "data_offset": 2048, 00:27:29.878 "data_size": 63488 00:27:29.878 }, 00:27:29.878 { 00:27:29.878 "name": "BaseBdev3", 00:27:29.878 "uuid": "3b1e23a2-63cb-49f4-abc4-bf8e835852f9", 00:27:29.878 "is_configured": true, 00:27:29.878 "data_offset": 2048, 00:27:29.878 "data_size": 63488 00:27:29.878 }, 00:27:29.878 { 00:27:29.878 "name": "BaseBdev4", 00:27:29.879 "uuid": "2dc45260-8673-4e7b-86cc-0749be02d347", 00:27:29.879 "is_configured": true, 00:27:29.879 "data_offset": 2048, 00:27:29.879 "data_size": 63488 00:27:29.879 } 00:27:29.879 ] 00:27:29.879 }' 00:27:29.879 17:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:29.879 17:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:30.446 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:27:30.446 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:30.446 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:30.446 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:30.446 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:27:30.446 
17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:30.446 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:30.446 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:30.446 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.446 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:30.446 [2024-11-26 17:24:00.382879] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:30.446 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.446 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:30.446 "name": "Existed_Raid", 00:27:30.446 "aliases": [ 00:27:30.446 "58debaa3-e3ef-45b5-8ff9-978a1bc14f94" 00:27:30.446 ], 00:27:30.446 "product_name": "Raid Volume", 00:27:30.446 "block_size": 512, 00:27:30.446 "num_blocks": 253952, 00:27:30.446 "uuid": "58debaa3-e3ef-45b5-8ff9-978a1bc14f94", 00:27:30.446 "assigned_rate_limits": { 00:27:30.446 "rw_ios_per_sec": 0, 00:27:30.446 "rw_mbytes_per_sec": 0, 00:27:30.446 "r_mbytes_per_sec": 0, 00:27:30.446 "w_mbytes_per_sec": 0 00:27:30.446 }, 00:27:30.446 "claimed": false, 00:27:30.446 "zoned": false, 00:27:30.446 "supported_io_types": { 00:27:30.446 "read": true, 00:27:30.446 "write": true, 00:27:30.446 "unmap": true, 00:27:30.446 "flush": true, 00:27:30.446 "reset": true, 00:27:30.446 "nvme_admin": false, 00:27:30.446 "nvme_io": false, 00:27:30.446 "nvme_io_md": false, 00:27:30.446 "write_zeroes": true, 00:27:30.446 "zcopy": false, 00:27:30.446 "get_zone_info": false, 00:27:30.446 "zone_management": false, 00:27:30.446 "zone_append": false, 00:27:30.446 "compare": false, 00:27:30.446 "compare_and_write": false, 00:27:30.446 "abort": 
false, 00:27:30.446 "seek_hole": false, 00:27:30.446 "seek_data": false, 00:27:30.446 "copy": false, 00:27:30.447 "nvme_iov_md": false 00:27:30.447 }, 00:27:30.447 "memory_domains": [ 00:27:30.447 { 00:27:30.447 "dma_device_id": "system", 00:27:30.447 "dma_device_type": 1 00:27:30.447 }, 00:27:30.447 { 00:27:30.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.447 "dma_device_type": 2 00:27:30.447 }, 00:27:30.447 { 00:27:30.447 "dma_device_id": "system", 00:27:30.447 "dma_device_type": 1 00:27:30.447 }, 00:27:30.447 { 00:27:30.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.447 "dma_device_type": 2 00:27:30.447 }, 00:27:30.447 { 00:27:30.447 "dma_device_id": "system", 00:27:30.447 "dma_device_type": 1 00:27:30.447 }, 00:27:30.447 { 00:27:30.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.447 "dma_device_type": 2 00:27:30.447 }, 00:27:30.447 { 00:27:30.447 "dma_device_id": "system", 00:27:30.447 "dma_device_type": 1 00:27:30.447 }, 00:27:30.447 { 00:27:30.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.447 "dma_device_type": 2 00:27:30.447 } 00:27:30.447 ], 00:27:30.447 "driver_specific": { 00:27:30.447 "raid": { 00:27:30.447 "uuid": "58debaa3-e3ef-45b5-8ff9-978a1bc14f94", 00:27:30.447 "strip_size_kb": 64, 00:27:30.447 "state": "online", 00:27:30.447 "raid_level": "concat", 00:27:30.447 "superblock": true, 00:27:30.447 "num_base_bdevs": 4, 00:27:30.447 "num_base_bdevs_discovered": 4, 00:27:30.447 "num_base_bdevs_operational": 4, 00:27:30.447 "base_bdevs_list": [ 00:27:30.447 { 00:27:30.447 "name": "NewBaseBdev", 00:27:30.447 "uuid": "feba913c-d27b-4bb6-9982-d83924c3d773", 00:27:30.447 "is_configured": true, 00:27:30.447 "data_offset": 2048, 00:27:30.447 "data_size": 63488 00:27:30.447 }, 00:27:30.447 { 00:27:30.447 "name": "BaseBdev2", 00:27:30.447 "uuid": "07d84cf1-4bdd-4340-a4fa-7ef81d43d0bf", 00:27:30.447 "is_configured": true, 00:27:30.447 "data_offset": 2048, 00:27:30.447 "data_size": 63488 00:27:30.447 }, 00:27:30.447 { 00:27:30.447 
"name": "BaseBdev3", 00:27:30.447 "uuid": "3b1e23a2-63cb-49f4-abc4-bf8e835852f9", 00:27:30.447 "is_configured": true, 00:27:30.447 "data_offset": 2048, 00:27:30.447 "data_size": 63488 00:27:30.447 }, 00:27:30.447 { 00:27:30.447 "name": "BaseBdev4", 00:27:30.447 "uuid": "2dc45260-8673-4e7b-86cc-0749be02d347", 00:27:30.447 "is_configured": true, 00:27:30.447 "data_offset": 2048, 00:27:30.447 "data_size": 63488 00:27:30.447 } 00:27:30.447 ] 00:27:30.447 } 00:27:30.447 } 00:27:30.447 }' 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:27:30.447 BaseBdev2 00:27:30.447 BaseBdev3 00:27:30.447 BaseBdev4' 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:30.447 17:24:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:30.447 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:30.706 [2024-11-26 17:24:00.666038] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:30.706 [2024-11-26 17:24:00.666079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:30.706 [2024-11-26 17:24:00.666182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:30.706 [2024-11-26 17:24:00.666264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:30.706 [2024-11-26 17:24:00.666278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72060 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72060 ']' 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72060 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72060 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:30.706 killing process with pid 72060 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72060' 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72060 00:27:30.706 [2024-11-26 17:24:00.718185] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:30.706 17:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72060 00:27:31.273 [2024-11-26 17:24:01.128102] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:32.210 17:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:27:32.210 00:27:32.210 real 0m11.568s 00:27:32.210 user 0m18.289s 00:27:32.210 sys 0m2.447s 00:27:32.210 17:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:32.210 17:24:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:32.210 ************************************ 00:27:32.210 END TEST raid_state_function_test_sb 00:27:32.210 ************************************ 00:27:32.468 17:24:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:27:32.468 17:24:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:32.468 17:24:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:32.468 17:24:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:32.469 ************************************ 00:27:32.469 START TEST raid_superblock_test 00:27:32.469 ************************************ 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72729 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72729 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72729 ']' 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:32.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:32.469 17:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.469 [2024-11-26 17:24:02.505396] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:27:32.469 [2024-11-26 17:24:02.505570] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72729 ] 00:27:32.728 [2024-11-26 17:24:02.688134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.728 [2024-11-26 17:24:02.835786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.987 [2024-11-26 17:24:03.066224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:32.987 [2024-11-26 17:24:03.066301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:33.246 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.246 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:27:33.246 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:33.246 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:33.246 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:33.246 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:33.246 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:33.246 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:33.246 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:33.246 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:33.246 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:27:33.246 
17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.246 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.506 malloc1 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.506 [2024-11-26 17:24:03.402296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:33.506 [2024-11-26 17:24:03.402365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.506 [2024-11-26 17:24:03.402392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:33.506 [2024-11-26 17:24:03.402405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.506 [2024-11-26 17:24:03.404968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.506 [2024-11-26 17:24:03.405004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:33.506 pt1 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.506 malloc2 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.506 [2024-11-26 17:24:03.461130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:33.506 [2024-11-26 17:24:03.461200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.506 [2024-11-26 17:24:03.461237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:33.506 [2024-11-26 17:24:03.461250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.506 [2024-11-26 17:24:03.464046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.506 [2024-11-26 17:24:03.464085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:33.506 
pt2 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.506 malloc3 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.506 [2024-11-26 17:24:03.535914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:33.506 [2024-11-26 17:24:03.535980] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.506 [2024-11-26 17:24:03.536010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:33.506 [2024-11-26 17:24:03.536024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.506 [2024-11-26 17:24:03.538823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.506 pt3 00:27:33.506 [2024-11-26 17:24:03.538990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.506 malloc4 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.506 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.506 [2024-11-26 17:24:03.599479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:33.506 [2024-11-26 17:24:03.599556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.506 [2024-11-26 17:24:03.599583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:27:33.506 [2024-11-26 17:24:03.599596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.507 [2024-11-26 17:24:03.602244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.507 [2024-11-26 17:24:03.602396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:33.507 pt4 00:27:33.507 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.507 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:33.507 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:33.507 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:27:33.507 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.507 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.507 [2024-11-26 17:24:03.611534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:33.507 [2024-11-26 
17:24:03.613845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:33.507 [2024-11-26 17:24:03.613942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:33.507 [2024-11-26 17:24:03.613991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:33.507 [2024-11-26 17:24:03.614192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:33.507 [2024-11-26 17:24:03.614205] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:33.507 [2024-11-26 17:24:03.614540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:33.507 [2024-11-26 17:24:03.614773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:33.507 [2024-11-26 17:24:03.614788] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:33.507 [2024-11-26 17:24:03.614977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.765 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:33.765 "name": "raid_bdev1", 00:27:33.765 "uuid": "3d3a5f19-1934-485a-b3d4-eaf6664d7793", 00:27:33.765 "strip_size_kb": 64, 00:27:33.765 "state": "online", 00:27:33.765 "raid_level": "concat", 00:27:33.765 "superblock": true, 00:27:33.765 "num_base_bdevs": 4, 00:27:33.765 "num_base_bdevs_discovered": 4, 00:27:33.765 "num_base_bdevs_operational": 4, 00:27:33.765 "base_bdevs_list": [ 00:27:33.765 { 00:27:33.765 "name": "pt1", 00:27:33.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:33.765 "is_configured": true, 00:27:33.766 "data_offset": 2048, 00:27:33.766 "data_size": 63488 00:27:33.766 }, 00:27:33.766 { 00:27:33.766 "name": "pt2", 00:27:33.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:33.766 "is_configured": true, 00:27:33.766 "data_offset": 2048, 00:27:33.766 "data_size": 63488 00:27:33.766 }, 00:27:33.766 { 00:27:33.766 "name": "pt3", 00:27:33.766 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:33.766 "is_configured": true, 00:27:33.766 "data_offset": 2048, 00:27:33.766 
"data_size": 63488 00:27:33.766 }, 00:27:33.766 { 00:27:33.766 "name": "pt4", 00:27:33.766 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:33.766 "is_configured": true, 00:27:33.766 "data_offset": 2048, 00:27:33.766 "data_size": 63488 00:27:33.766 } 00:27:33.766 ] 00:27:33.766 }' 00:27:33.766 17:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:33.766 17:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.024 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:34.024 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:34.024 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:34.024 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:34.025 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:34.025 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:34.025 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:34.025 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:34.025 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.025 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.025 [2024-11-26 17:24:04.087191] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:34.025 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.025 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:34.025 "name": "raid_bdev1", 00:27:34.025 "aliases": [ 00:27:34.025 "3d3a5f19-1934-485a-b3d4-eaf6664d7793" 
00:27:34.025 ], 00:27:34.025 "product_name": "Raid Volume", 00:27:34.025 "block_size": 512, 00:27:34.025 "num_blocks": 253952, 00:27:34.025 "uuid": "3d3a5f19-1934-485a-b3d4-eaf6664d7793", 00:27:34.025 "assigned_rate_limits": { 00:27:34.025 "rw_ios_per_sec": 0, 00:27:34.025 "rw_mbytes_per_sec": 0, 00:27:34.025 "r_mbytes_per_sec": 0, 00:27:34.025 "w_mbytes_per_sec": 0 00:27:34.025 }, 00:27:34.025 "claimed": false, 00:27:34.025 "zoned": false, 00:27:34.025 "supported_io_types": { 00:27:34.025 "read": true, 00:27:34.025 "write": true, 00:27:34.025 "unmap": true, 00:27:34.025 "flush": true, 00:27:34.025 "reset": true, 00:27:34.025 "nvme_admin": false, 00:27:34.025 "nvme_io": false, 00:27:34.025 "nvme_io_md": false, 00:27:34.025 "write_zeroes": true, 00:27:34.025 "zcopy": false, 00:27:34.025 "get_zone_info": false, 00:27:34.025 "zone_management": false, 00:27:34.025 "zone_append": false, 00:27:34.025 "compare": false, 00:27:34.025 "compare_and_write": false, 00:27:34.025 "abort": false, 00:27:34.025 "seek_hole": false, 00:27:34.025 "seek_data": false, 00:27:34.025 "copy": false, 00:27:34.025 "nvme_iov_md": false 00:27:34.025 }, 00:27:34.025 "memory_domains": [ 00:27:34.025 { 00:27:34.025 "dma_device_id": "system", 00:27:34.025 "dma_device_type": 1 00:27:34.025 }, 00:27:34.025 { 00:27:34.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:34.025 "dma_device_type": 2 00:27:34.025 }, 00:27:34.025 { 00:27:34.025 "dma_device_id": "system", 00:27:34.025 "dma_device_type": 1 00:27:34.025 }, 00:27:34.025 { 00:27:34.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:34.025 "dma_device_type": 2 00:27:34.025 }, 00:27:34.025 { 00:27:34.025 "dma_device_id": "system", 00:27:34.025 "dma_device_type": 1 00:27:34.025 }, 00:27:34.025 { 00:27:34.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:34.025 "dma_device_type": 2 00:27:34.025 }, 00:27:34.025 { 00:27:34.025 "dma_device_id": "system", 00:27:34.025 "dma_device_type": 1 00:27:34.025 }, 00:27:34.025 { 00:27:34.025 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:27:34.025 "dma_device_type": 2 00:27:34.025 } 00:27:34.025 ], 00:27:34.025 "driver_specific": { 00:27:34.025 "raid": { 00:27:34.025 "uuid": "3d3a5f19-1934-485a-b3d4-eaf6664d7793", 00:27:34.025 "strip_size_kb": 64, 00:27:34.025 "state": "online", 00:27:34.025 "raid_level": "concat", 00:27:34.025 "superblock": true, 00:27:34.025 "num_base_bdevs": 4, 00:27:34.025 "num_base_bdevs_discovered": 4, 00:27:34.025 "num_base_bdevs_operational": 4, 00:27:34.025 "base_bdevs_list": [ 00:27:34.025 { 00:27:34.025 "name": "pt1", 00:27:34.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:34.025 "is_configured": true, 00:27:34.025 "data_offset": 2048, 00:27:34.025 "data_size": 63488 00:27:34.025 }, 00:27:34.025 { 00:27:34.025 "name": "pt2", 00:27:34.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:34.025 "is_configured": true, 00:27:34.025 "data_offset": 2048, 00:27:34.025 "data_size": 63488 00:27:34.025 }, 00:27:34.025 { 00:27:34.025 "name": "pt3", 00:27:34.025 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:34.025 "is_configured": true, 00:27:34.025 "data_offset": 2048, 00:27:34.025 "data_size": 63488 00:27:34.025 }, 00:27:34.025 { 00:27:34.025 "name": "pt4", 00:27:34.025 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:34.025 "is_configured": true, 00:27:34.025 "data_offset": 2048, 00:27:34.025 "data_size": 63488 00:27:34.025 } 00:27:34.025 ] 00:27:34.025 } 00:27:34.025 } 00:27:34.025 }' 00:27:34.025 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:34.284 pt2 00:27:34.284 pt3 00:27:34.284 pt4' 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:34.284 17:24:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.284 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.543 [2024-11-26 17:24:04.434744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3d3a5f19-1934-485a-b3d4-eaf6664d7793 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3d3a5f19-1934-485a-b3d4-eaf6664d7793 ']' 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.543 [2024-11-26 17:24:04.482330] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:34.543 [2024-11-26 17:24:04.482469] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:34.543 [2024-11-26 17:24:04.482614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:34.543 [2024-11-26 17:24:04.482709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:34.543 [2024-11-26 17:24:04.482729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.543 17:24:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.543 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.803 [2024-11-26 17:24:04.654119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:34.803 [2024-11-26 17:24:04.656581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:34.803 [2024-11-26 17:24:04.656638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:27:34.803 [2024-11-26 17:24:04.656678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:27:34.803 [2024-11-26 17:24:04.656739] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:34.803 [2024-11-26 17:24:04.656804] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:34.803 [2024-11-26 17:24:04.656828] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:27:34.803 [2024-11-26 17:24:04.656852] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:27:34.803 [2024-11-26 17:24:04.656869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:34.803 [2024-11-26 17:24:04.656883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:27:34.803 request: 00:27:34.803 { 00:27:34.803 "name": "raid_bdev1", 00:27:34.803 "raid_level": "concat", 00:27:34.803 "base_bdevs": [ 00:27:34.803 "malloc1", 00:27:34.803 "malloc2", 00:27:34.803 "malloc3", 00:27:34.803 "malloc4" 00:27:34.803 ], 00:27:34.803 "strip_size_kb": 64, 00:27:34.803 "superblock": false, 00:27:34.803 "method": "bdev_raid_create", 00:27:34.803 "req_id": 1 00:27:34.803 } 00:27:34.803 Got JSON-RPC error response 00:27:34.803 response: 00:27:34.803 { 00:27:34.803 "code": -17, 00:27:34.803 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:34.803 } 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.803 [2024-11-26 17:24:04.721974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:34.803 [2024-11-26 17:24:04.722188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.803 [2024-11-26 17:24:04.722244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:34.803 [2024-11-26 17:24:04.722262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.803 [2024-11-26 17:24:04.725119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.803 [2024-11-26 17:24:04.725166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:34.803 [2024-11-26 17:24:04.725272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:34.803 [2024-11-26 17:24:04.725339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:34.803 pt1 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.803 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:34.803 "name": "raid_bdev1", 00:27:34.803 "uuid": "3d3a5f19-1934-485a-b3d4-eaf6664d7793", 00:27:34.803 "strip_size_kb": 64, 00:27:34.803 "state": "configuring", 00:27:34.803 "raid_level": "concat", 00:27:34.803 "superblock": true, 00:27:34.803 "num_base_bdevs": 4, 00:27:34.803 "num_base_bdevs_discovered": 1, 00:27:34.803 "num_base_bdevs_operational": 4, 00:27:34.803 "base_bdevs_list": [ 00:27:34.803 { 00:27:34.803 "name": "pt1", 00:27:34.803 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:34.803 "is_configured": true, 00:27:34.803 "data_offset": 2048, 00:27:34.803 "data_size": 63488 00:27:34.803 }, 00:27:34.803 { 00:27:34.803 "name": null, 00:27:34.803 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:34.803 "is_configured": false, 00:27:34.803 "data_offset": 2048, 00:27:34.803 "data_size": 63488 00:27:34.803 }, 00:27:34.803 { 00:27:34.803 "name": null, 00:27:34.803 
"uuid": "00000000-0000-0000-0000-000000000003", 00:27:34.803 "is_configured": false, 00:27:34.803 "data_offset": 2048, 00:27:34.803 "data_size": 63488 00:27:34.803 }, 00:27:34.803 { 00:27:34.803 "name": null, 00:27:34.803 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:34.803 "is_configured": false, 00:27:34.803 "data_offset": 2048, 00:27:34.803 "data_size": 63488 00:27:34.803 } 00:27:34.803 ] 00:27:34.803 }' 00:27:34.804 17:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:34.804 17:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.063 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:27:35.063 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:35.063 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.063 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.063 [2024-11-26 17:24:05.169747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:35.063 [2024-11-26 17:24:05.169990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.063 [2024-11-26 17:24:05.170027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:27:35.063 [2024-11-26 17:24:05.170043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.063 [2024-11-26 17:24:05.170603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.063 [2024-11-26 17:24:05.170637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:35.063 [2024-11-26 17:24:05.170739] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:35.063 [2024-11-26 17:24:05.170768] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:35.322 pt2 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.322 [2024-11-26 17:24:05.181710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:35.322 17:24:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.322 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:35.322 "name": "raid_bdev1", 00:27:35.322 "uuid": "3d3a5f19-1934-485a-b3d4-eaf6664d7793", 00:27:35.322 "strip_size_kb": 64, 00:27:35.322 "state": "configuring", 00:27:35.322 "raid_level": "concat", 00:27:35.322 "superblock": true, 00:27:35.322 "num_base_bdevs": 4, 00:27:35.322 "num_base_bdevs_discovered": 1, 00:27:35.322 "num_base_bdevs_operational": 4, 00:27:35.322 "base_bdevs_list": [ 00:27:35.322 { 00:27:35.322 "name": "pt1", 00:27:35.322 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:35.322 "is_configured": true, 00:27:35.322 "data_offset": 2048, 00:27:35.322 "data_size": 63488 00:27:35.322 }, 00:27:35.322 { 00:27:35.323 "name": null, 00:27:35.323 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:35.323 "is_configured": false, 00:27:35.323 "data_offset": 0, 00:27:35.323 "data_size": 63488 00:27:35.323 }, 00:27:35.323 { 00:27:35.323 "name": null, 00:27:35.323 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:35.323 "is_configured": false, 00:27:35.323 "data_offset": 2048, 00:27:35.323 "data_size": 63488 00:27:35.323 }, 00:27:35.323 { 00:27:35.323 "name": null, 00:27:35.323 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:35.323 "is_configured": false, 00:27:35.323 "data_offset": 2048, 00:27:35.323 "data_size": 63488 00:27:35.323 } 00:27:35.323 ] 00:27:35.323 }' 00:27:35.323 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:35.323 17:24:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.582 [2024-11-26 17:24:05.605736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:35.582 [2024-11-26 17:24:05.605961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.582 [2024-11-26 17:24:05.606002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:35.582 [2024-11-26 17:24:05.606017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.582 [2024-11-26 17:24:05.606596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.582 [2024-11-26 17:24:05.606639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:35.582 [2024-11-26 17:24:05.606740] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:35.582 [2024-11-26 17:24:05.606766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:35.582 pt2 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.582 [2024-11-26 17:24:05.617721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:35.582 [2024-11-26 17:24:05.617791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.582 [2024-11-26 17:24:05.617821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:27:35.582 [2024-11-26 17:24:05.617834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.582 [2024-11-26 17:24:05.618360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.582 [2024-11-26 17:24:05.618386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:35.582 [2024-11-26 17:24:05.618486] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:35.582 [2024-11-26 17:24:05.618534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:35.582 pt3 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.582 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.582 [2024-11-26 17:24:05.629696] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:35.582 [2024-11-26 17:24:05.629759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.583 [2024-11-26 17:24:05.629787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:35.583 [2024-11-26 17:24:05.629800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.583 [2024-11-26 17:24:05.630306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.583 [2024-11-26 17:24:05.630325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:35.583 [2024-11-26 17:24:05.630419] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:35.583 [2024-11-26 17:24:05.630446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:35.583 [2024-11-26 17:24:05.630654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:35.583 [2024-11-26 17:24:05.630666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:35.583 [2024-11-26 17:24:05.630950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:35.583 [2024-11-26 17:24:05.631114] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:35.583 [2024-11-26 17:24:05.631143] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:27:35.583 [2024-11-26 17:24:05.631276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:35.583 pt4 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:35.583 "name": "raid_bdev1", 00:27:35.583 "uuid": "3d3a5f19-1934-485a-b3d4-eaf6664d7793", 00:27:35.583 "strip_size_kb": 64, 00:27:35.583 "state": "online", 00:27:35.583 "raid_level": "concat", 00:27:35.583 
"superblock": true, 00:27:35.583 "num_base_bdevs": 4, 00:27:35.583 "num_base_bdevs_discovered": 4, 00:27:35.583 "num_base_bdevs_operational": 4, 00:27:35.583 "base_bdevs_list": [ 00:27:35.583 { 00:27:35.583 "name": "pt1", 00:27:35.583 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:35.583 "is_configured": true, 00:27:35.583 "data_offset": 2048, 00:27:35.583 "data_size": 63488 00:27:35.583 }, 00:27:35.583 { 00:27:35.583 "name": "pt2", 00:27:35.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:35.583 "is_configured": true, 00:27:35.583 "data_offset": 2048, 00:27:35.583 "data_size": 63488 00:27:35.583 }, 00:27:35.583 { 00:27:35.583 "name": "pt3", 00:27:35.583 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:35.583 "is_configured": true, 00:27:35.583 "data_offset": 2048, 00:27:35.583 "data_size": 63488 00:27:35.583 }, 00:27:35.583 { 00:27:35.583 "name": "pt4", 00:27:35.583 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:35.583 "is_configured": true, 00:27:35.583 "data_offset": 2048, 00:27:35.583 "data_size": 63488 00:27:35.583 } 00:27:35.583 ] 00:27:35.583 }' 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:35.583 17:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:36.151 17:24:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:36.151 [2024-11-26 17:24:06.090079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:36.151 "name": "raid_bdev1", 00:27:36.151 "aliases": [ 00:27:36.151 "3d3a5f19-1934-485a-b3d4-eaf6664d7793" 00:27:36.151 ], 00:27:36.151 "product_name": "Raid Volume", 00:27:36.151 "block_size": 512, 00:27:36.151 "num_blocks": 253952, 00:27:36.151 "uuid": "3d3a5f19-1934-485a-b3d4-eaf6664d7793", 00:27:36.151 "assigned_rate_limits": { 00:27:36.151 "rw_ios_per_sec": 0, 00:27:36.151 "rw_mbytes_per_sec": 0, 00:27:36.151 "r_mbytes_per_sec": 0, 00:27:36.151 "w_mbytes_per_sec": 0 00:27:36.151 }, 00:27:36.151 "claimed": false, 00:27:36.151 "zoned": false, 00:27:36.151 "supported_io_types": { 00:27:36.151 "read": true, 00:27:36.151 "write": true, 00:27:36.151 "unmap": true, 00:27:36.151 "flush": true, 00:27:36.151 "reset": true, 00:27:36.151 "nvme_admin": false, 00:27:36.151 "nvme_io": false, 00:27:36.151 "nvme_io_md": false, 00:27:36.151 "write_zeroes": true, 00:27:36.151 "zcopy": false, 00:27:36.151 "get_zone_info": false, 00:27:36.151 "zone_management": false, 00:27:36.151 "zone_append": false, 00:27:36.151 "compare": false, 00:27:36.151 "compare_and_write": false, 00:27:36.151 "abort": false, 00:27:36.151 "seek_hole": false, 00:27:36.151 "seek_data": false, 00:27:36.151 "copy": false, 00:27:36.151 "nvme_iov_md": false 00:27:36.151 }, 00:27:36.151 
"memory_domains": [ 00:27:36.151 { 00:27:36.151 "dma_device_id": "system", 00:27:36.151 "dma_device_type": 1 00:27:36.151 }, 00:27:36.151 { 00:27:36.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:36.151 "dma_device_type": 2 00:27:36.151 }, 00:27:36.151 { 00:27:36.151 "dma_device_id": "system", 00:27:36.151 "dma_device_type": 1 00:27:36.151 }, 00:27:36.151 { 00:27:36.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:36.151 "dma_device_type": 2 00:27:36.151 }, 00:27:36.151 { 00:27:36.151 "dma_device_id": "system", 00:27:36.151 "dma_device_type": 1 00:27:36.151 }, 00:27:36.151 { 00:27:36.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:36.151 "dma_device_type": 2 00:27:36.151 }, 00:27:36.151 { 00:27:36.151 "dma_device_id": "system", 00:27:36.151 "dma_device_type": 1 00:27:36.151 }, 00:27:36.151 { 00:27:36.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:36.151 "dma_device_type": 2 00:27:36.151 } 00:27:36.151 ], 00:27:36.151 "driver_specific": { 00:27:36.151 "raid": { 00:27:36.151 "uuid": "3d3a5f19-1934-485a-b3d4-eaf6664d7793", 00:27:36.151 "strip_size_kb": 64, 00:27:36.151 "state": "online", 00:27:36.151 "raid_level": "concat", 00:27:36.151 "superblock": true, 00:27:36.151 "num_base_bdevs": 4, 00:27:36.151 "num_base_bdevs_discovered": 4, 00:27:36.151 "num_base_bdevs_operational": 4, 00:27:36.151 "base_bdevs_list": [ 00:27:36.151 { 00:27:36.151 "name": "pt1", 00:27:36.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:36.151 "is_configured": true, 00:27:36.151 "data_offset": 2048, 00:27:36.151 "data_size": 63488 00:27:36.151 }, 00:27:36.151 { 00:27:36.151 "name": "pt2", 00:27:36.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:36.151 "is_configured": true, 00:27:36.151 "data_offset": 2048, 00:27:36.151 "data_size": 63488 00:27:36.151 }, 00:27:36.151 { 00:27:36.151 "name": "pt3", 00:27:36.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:36.151 "is_configured": true, 00:27:36.151 "data_offset": 2048, 00:27:36.151 "data_size": 63488 
00:27:36.151 }, 00:27:36.151 { 00:27:36.151 "name": "pt4", 00:27:36.151 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:36.151 "is_configured": true, 00:27:36.151 "data_offset": 2048, 00:27:36.151 "data_size": 63488 00:27:36.151 } 00:27:36.151 ] 00:27:36.151 } 00:27:36.151 } 00:27:36.151 }' 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:36.151 pt2 00:27:36.151 pt3 00:27:36.151 pt4' 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:36.151 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.410 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:36.410 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:36.410 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:36.410 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:36.410 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:36.410 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.410 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:36.410 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:36.411 [2024-11-26 17:24:06.406062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3d3a5f19-1934-485a-b3d4-eaf6664d7793 '!=' 3d3a5f19-1934-485a-b3d4-eaf6664d7793 ']' 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72729 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72729 ']' 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72729 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72729 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:36.411 killing process with pid 72729 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72729' 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72729 00:27:36.411 [2024-11-26 17:24:06.494827] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:36.411 17:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72729 00:27:36.411 [2024-11-26 17:24:06.494948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:36.411 [2024-11-26 17:24:06.495041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:36.411 [2024-11-26 17:24:06.495054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:27:36.979 [2024-11-26 17:24:06.921079] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:38.356 17:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:27:38.356 00:27:38.356 real 0m5.728s 00:27:38.356 user 0m8.130s 00:27:38.356 sys 0m1.153s 00:27:38.356 ************************************ 00:27:38.356 END TEST raid_superblock_test 00:27:38.356 ************************************ 00:27:38.356 17:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:38.356 17:24:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.356 17:24:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:27:38.356 17:24:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:38.356 17:24:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:38.356 17:24:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:38.356 ************************************ 00:27:38.356 START TEST raid_read_error_test 00:27:38.356 ************************************ 00:27:38.356 17:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:27:38.356 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:27:38.356 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:27:38.356 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:27:38.356 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:27:38.356 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:38.356 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:27:38.356 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:38.356 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:38.356 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:27:38.356 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:38.356 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Tm1FCJoech 00:27:38.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72995 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72995 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72995 ']' 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:38.357 17:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.357 [2024-11-26 17:24:08.306353] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:27:38.357 [2024-11-26 17:24:08.306741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72995 ] 00:27:38.618 [2024-11-26 17:24:08.489169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.618 [2024-11-26 17:24:08.631292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.878 [2024-11-26 17:24:08.855844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:38.878 [2024-11-26 17:24:08.856101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:39.255 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:39.255 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:27:39.255 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:39.255 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:39.255 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.255 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.255 BaseBdev1_malloc 00:27:39.255 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.256 true 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.256 [2024-11-26 17:24:09.241392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:39.256 [2024-11-26 17:24:09.241590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:39.256 [2024-11-26 17:24:09.241671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:39.256 [2024-11-26 17:24:09.241823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:39.256 [2024-11-26 17:24:09.244613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:39.256 [2024-11-26 17:24:09.244790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:39.256 BaseBdev1 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.256 BaseBdev2_malloc 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.256 true 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.256 [2024-11-26 17:24:09.315620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:39.256 [2024-11-26 17:24:09.315693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:39.256 [2024-11-26 17:24:09.315717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:39.256 [2024-11-26 17:24:09.315732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:39.256 [2024-11-26 17:24:09.318366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:39.256 [2024-11-26 17:24:09.318549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:39.256 BaseBdev2 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.256 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.541 BaseBdev3_malloc 00:27:39.541 17:24:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.541 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:27:39.541 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.541 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.541 true 00:27:39.541 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.541 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:39.541 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.541 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.541 [2024-11-26 17:24:09.397843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:39.541 [2024-11-26 17:24:09.398162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:39.541 [2024-11-26 17:24:09.398218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:39.541 [2024-11-26 17:24:09.398237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:39.541 [2024-11-26 17:24:09.401720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:39.541 [2024-11-26 17:24:09.401790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:39.541 BaseBdev3 00:27:39.541 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.541 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:39.541 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:27:39.541 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.541 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.541 BaseBdev4_malloc 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.542 true 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.542 [2024-11-26 17:24:09.470546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:39.542 [2024-11-26 17:24:09.470616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:39.542 [2024-11-26 17:24:09.470642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:39.542 [2024-11-26 17:24:09.470657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:39.542 [2024-11-26 17:24:09.473237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:39.542 [2024-11-26 17:24:09.473409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:39.542 BaseBdev4 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.542 [2024-11-26 17:24:09.482601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:39.542 [2024-11-26 17:24:09.484860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:39.542 [2024-11-26 17:24:09.484946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:39.542 [2024-11-26 17:24:09.485017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:39.542 [2024-11-26 17:24:09.485267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:27:39.542 [2024-11-26 17:24:09.485284] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:39.542 [2024-11-26 17:24:09.485592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:27:39.542 [2024-11-26 17:24:09.485775] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:27:39.542 [2024-11-26 17:24:09.485789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:27:39.542 [2024-11-26 17:24:09.485954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:39.542 17:24:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:39.542 "name": "raid_bdev1", 00:27:39.542 "uuid": "2e8c86c9-5d58-4fcd-b9dc-d82d780a60e2", 00:27:39.542 "strip_size_kb": 64, 00:27:39.542 "state": "online", 00:27:39.542 "raid_level": "concat", 00:27:39.542 "superblock": true, 00:27:39.542 "num_base_bdevs": 4, 00:27:39.542 "num_base_bdevs_discovered": 4, 00:27:39.542 "num_base_bdevs_operational": 4, 00:27:39.542 "base_bdevs_list": [ 
00:27:39.542 { 00:27:39.542 "name": "BaseBdev1", 00:27:39.542 "uuid": "0ed2f645-4b73-544c-a725-78cba92c2893", 00:27:39.542 "is_configured": true, 00:27:39.542 "data_offset": 2048, 00:27:39.542 "data_size": 63488 00:27:39.542 }, 00:27:39.542 { 00:27:39.542 "name": "BaseBdev2", 00:27:39.542 "uuid": "23ab8726-ecb2-57d9-a73a-d1d5959f9394", 00:27:39.542 "is_configured": true, 00:27:39.542 "data_offset": 2048, 00:27:39.542 "data_size": 63488 00:27:39.542 }, 00:27:39.542 { 00:27:39.542 "name": "BaseBdev3", 00:27:39.542 "uuid": "824f48c8-8cb4-5a7a-9354-89969f3900e1", 00:27:39.542 "is_configured": true, 00:27:39.542 "data_offset": 2048, 00:27:39.542 "data_size": 63488 00:27:39.542 }, 00:27:39.542 { 00:27:39.542 "name": "BaseBdev4", 00:27:39.542 "uuid": "284e31a0-afe8-578e-9790-ebf0dd6c0b18", 00:27:39.542 "is_configured": true, 00:27:39.542 "data_offset": 2048, 00:27:39.542 "data_size": 63488 00:27:39.542 } 00:27:39.542 ] 00:27:39.542 }' 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:39.542 17:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.801 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:39.801 17:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:40.061 [2024-11-26 17:24:10.007500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.995 17:24:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.995 17:24:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:40.995 "name": "raid_bdev1", 00:27:40.995 "uuid": "2e8c86c9-5d58-4fcd-b9dc-d82d780a60e2", 00:27:40.995 "strip_size_kb": 64, 00:27:40.995 "state": "online", 00:27:40.995 "raid_level": "concat", 00:27:40.995 "superblock": true, 00:27:40.995 "num_base_bdevs": 4, 00:27:40.995 "num_base_bdevs_discovered": 4, 00:27:40.995 "num_base_bdevs_operational": 4, 00:27:40.995 "base_bdevs_list": [ 00:27:40.995 { 00:27:40.995 "name": "BaseBdev1", 00:27:40.995 "uuid": "0ed2f645-4b73-544c-a725-78cba92c2893", 00:27:40.995 "is_configured": true, 00:27:40.995 "data_offset": 2048, 00:27:40.995 "data_size": 63488 00:27:40.995 }, 00:27:40.995 { 00:27:40.995 "name": "BaseBdev2", 00:27:40.995 "uuid": "23ab8726-ecb2-57d9-a73a-d1d5959f9394", 00:27:40.995 "is_configured": true, 00:27:40.995 "data_offset": 2048, 00:27:40.995 "data_size": 63488 00:27:40.995 }, 00:27:40.995 { 00:27:40.995 "name": "BaseBdev3", 00:27:40.995 "uuid": "824f48c8-8cb4-5a7a-9354-89969f3900e1", 00:27:40.995 "is_configured": true, 00:27:40.995 "data_offset": 2048, 00:27:40.995 "data_size": 63488 00:27:40.995 }, 00:27:40.995 { 00:27:40.995 "name": "BaseBdev4", 00:27:40.995 "uuid": "284e31a0-afe8-578e-9790-ebf0dd6c0b18", 00:27:40.995 "is_configured": true, 00:27:40.995 "data_offset": 2048, 00:27:40.995 "data_size": 63488 00:27:40.995 } 00:27:40.995 ] 00:27:40.995 }' 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:40.995 17:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.254 17:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:41.254 17:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.254 17:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.254 [2024-11-26 17:24:11.304206] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:41.254 [2024-11-26 17:24:11.304249] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:41.254 [2024-11-26 17:24:11.306956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:41.254 [2024-11-26 17:24:11.307031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:41.254 [2024-11-26 17:24:11.307081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:41.254 [2024-11-26 17:24:11.307099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:27:41.254 { 00:27:41.254 "results": [ 00:27:41.254 { 00:27:41.254 "job": "raid_bdev1", 00:27:41.254 "core_mask": "0x1", 00:27:41.254 "workload": "randrw", 00:27:41.254 "percentage": 50, 00:27:41.254 "status": "finished", 00:27:41.254 "queue_depth": 1, 00:27:41.254 "io_size": 131072, 00:27:41.254 "runtime": 1.29606, 00:27:41.254 "iops": 13896.733175933214, 00:27:41.254 "mibps": 1737.0916469916517, 00:27:41.254 "io_failed": 1, 00:27:41.254 "io_timeout": 0, 00:27:41.254 "avg_latency_us": 100.01573462403913, 00:27:41.254 "min_latency_us": 27.553413654618474, 00:27:41.254 "max_latency_us": 1467.3220883534136 00:27:41.254 } 00:27:41.254 ], 00:27:41.254 "core_count": 1 00:27:41.254 } 00:27:41.254 17:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.254 17:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72995 00:27:41.254 17:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72995 ']' 00:27:41.254 17:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72995 00:27:41.254 17:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:27:41.254 17:24:11 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:41.254 17:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72995 00:27:41.254 killing process with pid 72995 00:27:41.254 17:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:41.254 17:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:41.254 17:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72995' 00:27:41.254 17:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72995 00:27:41.254 [2024-11-26 17:24:11.359928] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:41.254 17:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72995 00:27:41.821 [2024-11-26 17:24:11.710755] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:43.199 17:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:27:43.199 17:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:27:43.200 17:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Tm1FCJoech 00:27:43.200 17:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:27:43.200 17:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:27:43.200 17:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:43.200 17:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:43.200 17:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:27:43.200 00:27:43.200 real 0m4.814s 00:27:43.200 user 0m5.502s 00:27:43.200 sys 0m0.712s 00:27:43.200 17:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:27:43.200 ************************************ 00:27:43.200 END TEST raid_read_error_test 00:27:43.200 ************************************ 00:27:43.200 17:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.200 17:24:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:27:43.200 17:24:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:43.200 17:24:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:43.200 17:24:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:43.200 ************************************ 00:27:43.200 START TEST raid_write_error_test 00:27:43.200 ************************************ 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FbDXScjgnu 00:27:43.200 17:24:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73141 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73141 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73141 ']' 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:43.200 17:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.200 [2024-11-26 17:24:13.202852] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:27:43.200 [2024-11-26 17:24:13.202999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73141 ] 00:27:43.459 [2024-11-26 17:24:13.386594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.459 [2024-11-26 17:24:13.530505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.717 [2024-11-26 17:24:13.742931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:43.718 [2024-11-26 17:24:13.742982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:43.977 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:43.977 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:27:43.977 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:43.977 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:43.977 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.977 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.237 BaseBdev1_malloc 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.237 true 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.237 [2024-11-26 17:24:14.115876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:44.237 [2024-11-26 17:24:14.115945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:44.237 [2024-11-26 17:24:14.115969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:44.237 [2024-11-26 17:24:14.115985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:44.237 [2024-11-26 17:24:14.118551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:44.237 [2024-11-26 17:24:14.118592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:44.237 BaseBdev1 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.237 BaseBdev2_malloc 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:44.237 17:24:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.237 true 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.237 [2024-11-26 17:24:14.182784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:44.237 [2024-11-26 17:24:14.183008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:44.237 [2024-11-26 17:24:14.183072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:44.237 [2024-11-26 17:24:14.183170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:44.237 [2024-11-26 17:24:14.186229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:44.237 [2024-11-26 17:24:14.186413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:44.237 BaseBdev2 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:27:44.237 BaseBdev3_malloc 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.237 true 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.237 [2024-11-26 17:24:14.266872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:44.237 [2024-11-26 17:24:14.266937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:44.237 [2024-11-26 17:24:14.266961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:44.237 [2024-11-26 17:24:14.266975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:44.237 [2024-11-26 17:24:14.269919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:44.237 [2024-11-26 17:24:14.269963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:44.237 BaseBdev3 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.237 BaseBdev4_malloc 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.237 true 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.237 [2024-11-26 17:24:14.335251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:44.237 [2024-11-26 17:24:14.335535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:44.237 [2024-11-26 17:24:14.335574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:44.237 [2024-11-26 17:24:14.335592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:44.237 [2024-11-26 17:24:14.338362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:44.237 [2024-11-26 17:24:14.338410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:44.237 BaseBdev4 
00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.237 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:27:44.238 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.238 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.497 [2024-11-26 17:24:14.347369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:44.497 [2024-11-26 17:24:14.349622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:44.497 [2024-11-26 17:24:14.349695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:44.497 [2024-11-26 17:24:14.349760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:44.497 [2024-11-26 17:24:14.349989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:27:44.497 [2024-11-26 17:24:14.350004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:44.497 [2024-11-26 17:24:14.350285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:27:44.497 [2024-11-26 17:24:14.350449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:27:44.497 [2024-11-26 17:24:14.350462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:27:44.497 [2024-11-26 17:24:14.350785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.497 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:44.497 "name": "raid_bdev1", 00:27:44.498 "uuid": "76bcd5ac-0f7c-4926-9651-dc671b6ae4c8", 00:27:44.498 "strip_size_kb": 64, 00:27:44.498 "state": "online", 00:27:44.498 "raid_level": "concat", 00:27:44.498 "superblock": true, 00:27:44.498 "num_base_bdevs": 4, 00:27:44.498 "num_base_bdevs_discovered": 4, 00:27:44.498 
"num_base_bdevs_operational": 4, 00:27:44.498 "base_bdevs_list": [ 00:27:44.498 { 00:27:44.498 "name": "BaseBdev1", 00:27:44.498 "uuid": "5eeba275-5f52-596f-bc78-bcc8c504b321", 00:27:44.498 "is_configured": true, 00:27:44.498 "data_offset": 2048, 00:27:44.498 "data_size": 63488 00:27:44.498 }, 00:27:44.498 { 00:27:44.498 "name": "BaseBdev2", 00:27:44.498 "uuid": "b8f34124-0323-523b-ab91-a14c37b426aa", 00:27:44.498 "is_configured": true, 00:27:44.498 "data_offset": 2048, 00:27:44.498 "data_size": 63488 00:27:44.498 }, 00:27:44.498 { 00:27:44.498 "name": "BaseBdev3", 00:27:44.498 "uuid": "bc7a8e48-ae09-51f3-99bb-97fc5b7025b5", 00:27:44.498 "is_configured": true, 00:27:44.498 "data_offset": 2048, 00:27:44.498 "data_size": 63488 00:27:44.498 }, 00:27:44.498 { 00:27:44.498 "name": "BaseBdev4", 00:27:44.498 "uuid": "5d6e55d6-c1f4-54bc-baac-0e29a1010765", 00:27:44.498 "is_configured": true, 00:27:44.498 "data_offset": 2048, 00:27:44.498 "data_size": 63488 00:27:44.498 } 00:27:44.498 ] 00:27:44.498 }' 00:27:44.498 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:44.498 17:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.757 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:44.757 17:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:45.016 [2024-11-26 17:24:14.872132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.953 17:24:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:45.953 "name": "raid_bdev1", 00:27:45.953 "uuid": "76bcd5ac-0f7c-4926-9651-dc671b6ae4c8", 00:27:45.953 "strip_size_kb": 64, 00:27:45.953 "state": "online", 00:27:45.953 "raid_level": "concat", 00:27:45.953 "superblock": true, 00:27:45.953 "num_base_bdevs": 4, 00:27:45.953 "num_base_bdevs_discovered": 4, 00:27:45.953 "num_base_bdevs_operational": 4, 00:27:45.953 "base_bdevs_list": [ 00:27:45.953 { 00:27:45.953 "name": "BaseBdev1", 00:27:45.953 "uuid": "5eeba275-5f52-596f-bc78-bcc8c504b321", 00:27:45.953 "is_configured": true, 00:27:45.953 "data_offset": 2048, 00:27:45.953 "data_size": 63488 00:27:45.953 }, 00:27:45.953 { 00:27:45.953 "name": "BaseBdev2", 00:27:45.953 "uuid": "b8f34124-0323-523b-ab91-a14c37b426aa", 00:27:45.953 "is_configured": true, 00:27:45.953 "data_offset": 2048, 00:27:45.953 "data_size": 63488 00:27:45.953 }, 00:27:45.953 { 00:27:45.953 "name": "BaseBdev3", 00:27:45.953 "uuid": "bc7a8e48-ae09-51f3-99bb-97fc5b7025b5", 00:27:45.953 "is_configured": true, 00:27:45.953 "data_offset": 2048, 00:27:45.953 "data_size": 63488 00:27:45.953 }, 00:27:45.953 { 00:27:45.953 "name": "BaseBdev4", 00:27:45.953 "uuid": "5d6e55d6-c1f4-54bc-baac-0e29a1010765", 00:27:45.953 "is_configured": true, 00:27:45.953 "data_offset": 2048, 00:27:45.953 "data_size": 63488 00:27:45.953 } 00:27:45.953 ] 00:27:45.953 }' 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:45.953 17:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.212 17:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:46.212 17:24:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.212 17:24:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:46.212 [2024-11-26 17:24:16.229329] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:46.212 [2024-11-26 17:24:16.229372] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:46.212 [2024-11-26 17:24:16.232163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:46.212 [2024-11-26 17:24:16.232239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:46.212 [2024-11-26 17:24:16.232290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:46.212 [2024-11-26 17:24:16.232305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:27:46.212 { 00:27:46.212 "results": [ 00:27:46.212 { 00:27:46.212 "job": "raid_bdev1", 00:27:46.212 "core_mask": "0x1", 00:27:46.212 "workload": "randrw", 00:27:46.212 "percentage": 50, 00:27:46.212 "status": "finished", 00:27:46.212 "queue_depth": 1, 00:27:46.212 "io_size": 131072, 00:27:46.212 "runtime": 1.356986, 00:27:46.212 "iops": 14545.470623867895, 00:27:46.212 "mibps": 1818.183827983487, 00:27:46.212 "io_failed": 1, 00:27:46.212 "io_timeout": 0, 00:27:46.212 "avg_latency_us": 95.51001110679101, 00:27:46.212 "min_latency_us": 26.730923694779115, 00:27:46.212 "max_latency_us": 1552.8610441767069 00:27:46.212 } 00:27:46.212 ], 00:27:46.212 "core_count": 1 00:27:46.212 } 00:27:46.212 17:24:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.212 17:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73141 00:27:46.212 17:24:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73141 ']' 00:27:46.212 17:24:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73141 00:27:46.212 17:24:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:27:46.212 17:24:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:46.212 17:24:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73141 00:27:46.212 17:24:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:46.212 17:24:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:46.212 killing process with pid 73141 00:27:46.212 17:24:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73141' 00:27:46.212 17:24:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73141 00:27:46.212 [2024-11-26 17:24:16.273089] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:46.212 17:24:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73141 00:27:46.778 [2024-11-26 17:24:16.636242] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:48.155 17:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:27:48.155 17:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:27:48.155 17:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FbDXScjgnu 00:27:48.155 17:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:27:48.155 17:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:27:48.155 ************************************ 00:27:48.155 END TEST raid_write_error_test 00:27:48.155 ************************************ 00:27:48.155 17:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:48.155 17:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:48.155 17:24:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:27:48.155 00:27:48.155 real 0m4.899s 00:27:48.155 user 0m5.620s 00:27:48.155 sys 0m0.668s 00:27:48.155 17:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:48.155 17:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.155 17:24:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:27:48.155 17:24:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:27:48.155 17:24:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:48.155 17:24:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.155 17:24:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:48.155 ************************************ 00:27:48.155 START TEST raid_state_function_test 00:27:48.155 ************************************ 00:27:48.155 17:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:27:48.155 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:27:48.156 17:24:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73289 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73289' 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:48.156 Process raid pid: 73289 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73289 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73289 ']' 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.156 17:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.156 [2024-11-26 17:24:18.174327] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:27:48.156 [2024-11-26 17:24:18.174686] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:48.415 [2024-11-26 17:24:18.360686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.415 [2024-11-26 17:24:18.514855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.675 [2024-11-26 17:24:18.757575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:48.675 [2024-11-26 17:24:18.757867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:49.248 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.248 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:27:49.248 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:49.248 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.248 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.248 [2024-11-26 17:24:19.107843] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:49.248 [2024-11-26 17:24:19.107916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:49.248 [2024-11-26 17:24:19.107930] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:49.248 [2024-11-26 17:24:19.107945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:49.248 [2024-11-26 17:24:19.107954] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:27:49.248 [2024-11-26 17:24:19.107968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:49.248 [2024-11-26 17:24:19.107984] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:49.248 [2024-11-26 17:24:19.107997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:49.248 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.248 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:49.248 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:49.248 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:49.248 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:49.248 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:49.248 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:49.249 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:49.249 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:49.249 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:49.249 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:49.249 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:49.249 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.249 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:27:49.249 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.249 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.249 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:49.249 "name": "Existed_Raid", 00:27:49.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.249 "strip_size_kb": 0, 00:27:49.249 "state": "configuring", 00:27:49.249 "raid_level": "raid1", 00:27:49.249 "superblock": false, 00:27:49.249 "num_base_bdevs": 4, 00:27:49.249 "num_base_bdevs_discovered": 0, 00:27:49.249 "num_base_bdevs_operational": 4, 00:27:49.249 "base_bdevs_list": [ 00:27:49.249 { 00:27:49.249 "name": "BaseBdev1", 00:27:49.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.249 "is_configured": false, 00:27:49.249 "data_offset": 0, 00:27:49.249 "data_size": 0 00:27:49.249 }, 00:27:49.249 { 00:27:49.249 "name": "BaseBdev2", 00:27:49.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.249 "is_configured": false, 00:27:49.249 "data_offset": 0, 00:27:49.249 "data_size": 0 00:27:49.249 }, 00:27:49.249 { 00:27:49.249 "name": "BaseBdev3", 00:27:49.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.249 "is_configured": false, 00:27:49.249 "data_offset": 0, 00:27:49.249 "data_size": 0 00:27:49.249 }, 00:27:49.249 { 00:27:49.249 "name": "BaseBdev4", 00:27:49.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.249 "is_configured": false, 00:27:49.249 "data_offset": 0, 00:27:49.249 "data_size": 0 00:27:49.249 } 00:27:49.249 ] 00:27:49.249 }' 00:27:49.249 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:49.249 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.510 [2024-11-26 17:24:19.555216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:49.510 [2024-11-26 17:24:19.555263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.510 [2024-11-26 17:24:19.563200] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:49.510 [2024-11-26 17:24:19.563255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:49.510 [2024-11-26 17:24:19.563267] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:49.510 [2024-11-26 17:24:19.563280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:49.510 [2024-11-26 17:24:19.563288] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:49.510 [2024-11-26 17:24:19.563300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:49.510 [2024-11-26 17:24:19.563308] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:49.510 [2024-11-26 17:24:19.563320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.510 [2024-11-26 17:24:19.612884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:49.510 BaseBdev1 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.510 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.769 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.769 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:27:49.769 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.769 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.769 [ 00:27:49.769 { 00:27:49.769 "name": "BaseBdev1", 00:27:49.769 "aliases": [ 00:27:49.769 "8fde46a9-778e-4834-99d2-6d9ccef6df2f" 00:27:49.769 ], 00:27:49.769 "product_name": "Malloc disk", 00:27:49.769 "block_size": 512, 00:27:49.769 "num_blocks": 65536, 00:27:49.769 "uuid": "8fde46a9-778e-4834-99d2-6d9ccef6df2f", 00:27:49.769 "assigned_rate_limits": { 00:27:49.769 "rw_ios_per_sec": 0, 00:27:49.769 "rw_mbytes_per_sec": 0, 00:27:49.769 "r_mbytes_per_sec": 0, 00:27:49.769 "w_mbytes_per_sec": 0 00:27:49.769 }, 00:27:49.769 "claimed": true, 00:27:49.769 "claim_type": "exclusive_write", 00:27:49.769 "zoned": false, 00:27:49.769 "supported_io_types": { 00:27:49.769 "read": true, 00:27:49.769 "write": true, 00:27:49.769 "unmap": true, 00:27:49.769 "flush": true, 00:27:49.769 "reset": true, 00:27:49.769 "nvme_admin": false, 00:27:49.769 "nvme_io": false, 00:27:49.769 "nvme_io_md": false, 00:27:49.769 "write_zeroes": true, 00:27:49.769 "zcopy": true, 00:27:49.769 "get_zone_info": false, 00:27:49.769 "zone_management": false, 00:27:49.769 "zone_append": false, 00:27:49.769 "compare": false, 00:27:49.769 "compare_and_write": false, 00:27:49.769 "abort": true, 00:27:49.769 "seek_hole": false, 00:27:49.769 "seek_data": false, 00:27:49.769 "copy": true, 00:27:49.769 "nvme_iov_md": false 00:27:49.769 }, 00:27:49.769 "memory_domains": [ 00:27:49.769 { 00:27:49.769 "dma_device_id": "system", 00:27:49.769 "dma_device_type": 1 00:27:49.769 }, 00:27:49.769 { 00:27:49.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.769 "dma_device_type": 2 00:27:49.769 } 00:27:49.769 ], 00:27:49.769 "driver_specific": {} 00:27:49.769 } 00:27:49.769 ] 00:27:49.769 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:49.769 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:49.769 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:49.769 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:49.769 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:49.769 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:49.769 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:49.769 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:49.770 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:49.770 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:49.770 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:49.770 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:49.770 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:49.770 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:49.770 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.770 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.770 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.770 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:49.770 "name": "Existed_Raid", 00:27:49.770 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:49.770 "strip_size_kb": 0, 00:27:49.770 "state": "configuring", 00:27:49.770 "raid_level": "raid1", 00:27:49.770 "superblock": false, 00:27:49.770 "num_base_bdevs": 4, 00:27:49.770 "num_base_bdevs_discovered": 1, 00:27:49.770 "num_base_bdevs_operational": 4, 00:27:49.770 "base_bdevs_list": [ 00:27:49.770 { 00:27:49.770 "name": "BaseBdev1", 00:27:49.770 "uuid": "8fde46a9-778e-4834-99d2-6d9ccef6df2f", 00:27:49.770 "is_configured": true, 00:27:49.770 "data_offset": 0, 00:27:49.770 "data_size": 65536 00:27:49.770 }, 00:27:49.770 { 00:27:49.770 "name": "BaseBdev2", 00:27:49.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.770 "is_configured": false, 00:27:49.770 "data_offset": 0, 00:27:49.770 "data_size": 0 00:27:49.770 }, 00:27:49.770 { 00:27:49.770 "name": "BaseBdev3", 00:27:49.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.770 "is_configured": false, 00:27:49.770 "data_offset": 0, 00:27:49.770 "data_size": 0 00:27:49.770 }, 00:27:49.770 { 00:27:49.770 "name": "BaseBdev4", 00:27:49.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.770 "is_configured": false, 00:27:49.770 "data_offset": 0, 00:27:49.770 "data_size": 0 00:27:49.770 } 00:27:49.770 ] 00:27:49.770 }' 00:27:49.770 17:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:49.770 17:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.059 [2024-11-26 17:24:20.092324] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:50.059 [2024-11-26 17:24:20.092393] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.059 [2024-11-26 17:24:20.104345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:50.059 [2024-11-26 17:24:20.106836] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:50.059 [2024-11-26 17:24:20.107014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:50.059 [2024-11-26 17:24:20.107037] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:50.059 [2024-11-26 17:24:20.107055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:50.059 [2024-11-26 17:24:20.107064] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:50.059 [2024-11-26 17:24:20.107076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:50.059 17:24:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.059 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.318 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.318 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:50.318 "name": "Existed_Raid", 00:27:50.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.318 "strip_size_kb": 0, 00:27:50.318 "state": "configuring", 00:27:50.318 "raid_level": "raid1", 00:27:50.318 "superblock": false, 00:27:50.318 "num_base_bdevs": 4, 00:27:50.318 "num_base_bdevs_discovered": 1, 00:27:50.318 
"num_base_bdevs_operational": 4, 00:27:50.318 "base_bdevs_list": [ 00:27:50.318 { 00:27:50.318 "name": "BaseBdev1", 00:27:50.318 "uuid": "8fde46a9-778e-4834-99d2-6d9ccef6df2f", 00:27:50.318 "is_configured": true, 00:27:50.318 "data_offset": 0, 00:27:50.318 "data_size": 65536 00:27:50.318 }, 00:27:50.318 { 00:27:50.318 "name": "BaseBdev2", 00:27:50.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.318 "is_configured": false, 00:27:50.318 "data_offset": 0, 00:27:50.318 "data_size": 0 00:27:50.318 }, 00:27:50.318 { 00:27:50.318 "name": "BaseBdev3", 00:27:50.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.318 "is_configured": false, 00:27:50.318 "data_offset": 0, 00:27:50.318 "data_size": 0 00:27:50.318 }, 00:27:50.318 { 00:27:50.318 "name": "BaseBdev4", 00:27:50.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.318 "is_configured": false, 00:27:50.318 "data_offset": 0, 00:27:50.318 "data_size": 0 00:27:50.318 } 00:27:50.318 ] 00:27:50.318 }' 00:27:50.318 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:50.318 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.577 [2024-11-26 17:24:20.594338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:50.577 BaseBdev2 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.577 [ 00:27:50.577 { 00:27:50.577 "name": "BaseBdev2", 00:27:50.577 "aliases": [ 00:27:50.577 "299cc96d-82be-4707-a8d9-84c243d692a4" 00:27:50.577 ], 00:27:50.577 "product_name": "Malloc disk", 00:27:50.577 "block_size": 512, 00:27:50.577 "num_blocks": 65536, 00:27:50.577 "uuid": "299cc96d-82be-4707-a8d9-84c243d692a4", 00:27:50.577 "assigned_rate_limits": { 00:27:50.577 "rw_ios_per_sec": 0, 00:27:50.577 "rw_mbytes_per_sec": 0, 00:27:50.577 "r_mbytes_per_sec": 0, 00:27:50.577 "w_mbytes_per_sec": 0 00:27:50.577 }, 00:27:50.577 "claimed": true, 00:27:50.577 "claim_type": "exclusive_write", 00:27:50.577 "zoned": false, 00:27:50.577 "supported_io_types": { 00:27:50.577 "read": true, 00:27:50.577 "write": true, 00:27:50.577 
"unmap": true, 00:27:50.577 "flush": true, 00:27:50.577 "reset": true, 00:27:50.577 "nvme_admin": false, 00:27:50.577 "nvme_io": false, 00:27:50.577 "nvme_io_md": false, 00:27:50.577 "write_zeroes": true, 00:27:50.577 "zcopy": true, 00:27:50.577 "get_zone_info": false, 00:27:50.577 "zone_management": false, 00:27:50.577 "zone_append": false, 00:27:50.577 "compare": false, 00:27:50.577 "compare_and_write": false, 00:27:50.577 "abort": true, 00:27:50.577 "seek_hole": false, 00:27:50.577 "seek_data": false, 00:27:50.577 "copy": true, 00:27:50.577 "nvme_iov_md": false 00:27:50.577 }, 00:27:50.577 "memory_domains": [ 00:27:50.577 { 00:27:50.577 "dma_device_id": "system", 00:27:50.577 "dma_device_type": 1 00:27:50.577 }, 00:27:50.577 { 00:27:50.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:50.577 "dma_device_type": 2 00:27:50.577 } 00:27:50.577 ], 00:27:50.577 "driver_specific": {} 00:27:50.577 } 00:27:50.577 ] 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:50.577 17:24:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.577 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:50.577 "name": "Existed_Raid", 00:27:50.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.577 "strip_size_kb": 0, 00:27:50.577 "state": "configuring", 00:27:50.577 "raid_level": "raid1", 00:27:50.577 "superblock": false, 00:27:50.577 "num_base_bdevs": 4, 00:27:50.577 "num_base_bdevs_discovered": 2, 00:27:50.577 "num_base_bdevs_operational": 4, 00:27:50.577 "base_bdevs_list": [ 00:27:50.577 { 00:27:50.577 "name": "BaseBdev1", 00:27:50.577 "uuid": "8fde46a9-778e-4834-99d2-6d9ccef6df2f", 00:27:50.577 "is_configured": true, 00:27:50.577 "data_offset": 0, 00:27:50.577 "data_size": 65536 00:27:50.577 }, 00:27:50.577 { 00:27:50.577 "name": "BaseBdev2", 00:27:50.578 "uuid": "299cc96d-82be-4707-a8d9-84c243d692a4", 00:27:50.578 "is_configured": true, 00:27:50.578 
"data_offset": 0, 00:27:50.578 "data_size": 65536 00:27:50.578 }, 00:27:50.578 { 00:27:50.578 "name": "BaseBdev3", 00:27:50.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.578 "is_configured": false, 00:27:50.578 "data_offset": 0, 00:27:50.578 "data_size": 0 00:27:50.578 }, 00:27:50.578 { 00:27:50.578 "name": "BaseBdev4", 00:27:50.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.578 "is_configured": false, 00:27:50.578 "data_offset": 0, 00:27:50.578 "data_size": 0 00:27:50.578 } 00:27:50.578 ] 00:27:50.578 }' 00:27:50.578 17:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:50.578 17:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.144 [2024-11-26 17:24:21.109991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:51.144 BaseBdev3 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.144 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.144 [ 00:27:51.144 { 00:27:51.144 "name": "BaseBdev3", 00:27:51.144 "aliases": [ 00:27:51.144 "8215e59b-9684-4b8a-979d-837ebfdebb8b" 00:27:51.144 ], 00:27:51.144 "product_name": "Malloc disk", 00:27:51.144 "block_size": 512, 00:27:51.144 "num_blocks": 65536, 00:27:51.144 "uuid": "8215e59b-9684-4b8a-979d-837ebfdebb8b", 00:27:51.144 "assigned_rate_limits": { 00:27:51.144 "rw_ios_per_sec": 0, 00:27:51.144 "rw_mbytes_per_sec": 0, 00:27:51.144 "r_mbytes_per_sec": 0, 00:27:51.144 "w_mbytes_per_sec": 0 00:27:51.144 }, 00:27:51.144 "claimed": true, 00:27:51.144 "claim_type": "exclusive_write", 00:27:51.144 "zoned": false, 00:27:51.144 "supported_io_types": { 00:27:51.144 "read": true, 00:27:51.144 "write": true, 00:27:51.144 "unmap": true, 00:27:51.144 "flush": true, 00:27:51.144 "reset": true, 00:27:51.144 "nvme_admin": false, 00:27:51.145 "nvme_io": false, 00:27:51.145 "nvme_io_md": false, 00:27:51.145 "write_zeroes": true, 00:27:51.145 "zcopy": true, 00:27:51.145 "get_zone_info": false, 00:27:51.145 "zone_management": false, 00:27:51.145 "zone_append": false, 00:27:51.145 "compare": false, 00:27:51.145 "compare_and_write": false, 00:27:51.145 "abort": true, 
00:27:51.145 "seek_hole": false, 00:27:51.145 "seek_data": false, 00:27:51.145 "copy": true, 00:27:51.145 "nvme_iov_md": false 00:27:51.145 }, 00:27:51.145 "memory_domains": [ 00:27:51.145 { 00:27:51.145 "dma_device_id": "system", 00:27:51.145 "dma_device_type": 1 00:27:51.145 }, 00:27:51.145 { 00:27:51.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.145 "dma_device_type": 2 00:27:51.145 } 00:27:51.145 ], 00:27:51.145 "driver_specific": {} 00:27:51.145 } 00:27:51.145 ] 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:51.145 17:24:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:51.145 "name": "Existed_Raid", 00:27:51.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.145 "strip_size_kb": 0, 00:27:51.145 "state": "configuring", 00:27:51.145 "raid_level": "raid1", 00:27:51.145 "superblock": false, 00:27:51.145 "num_base_bdevs": 4, 00:27:51.145 "num_base_bdevs_discovered": 3, 00:27:51.145 "num_base_bdevs_operational": 4, 00:27:51.145 "base_bdevs_list": [ 00:27:51.145 { 00:27:51.145 "name": "BaseBdev1", 00:27:51.145 "uuid": "8fde46a9-778e-4834-99d2-6d9ccef6df2f", 00:27:51.145 "is_configured": true, 00:27:51.145 "data_offset": 0, 00:27:51.145 "data_size": 65536 00:27:51.145 }, 00:27:51.145 { 00:27:51.145 "name": "BaseBdev2", 00:27:51.145 "uuid": "299cc96d-82be-4707-a8d9-84c243d692a4", 00:27:51.145 "is_configured": true, 00:27:51.145 "data_offset": 0, 00:27:51.145 "data_size": 65536 00:27:51.145 }, 00:27:51.145 { 00:27:51.145 "name": "BaseBdev3", 00:27:51.145 "uuid": "8215e59b-9684-4b8a-979d-837ebfdebb8b", 00:27:51.145 "is_configured": true, 00:27:51.145 "data_offset": 0, 00:27:51.145 "data_size": 65536 00:27:51.145 }, 00:27:51.145 { 00:27:51.145 "name": "BaseBdev4", 00:27:51.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.145 "is_configured": false, 00:27:51.145 "data_offset": 
0, 00:27:51.145 "data_size": 0 00:27:51.145 } 00:27:51.145 ] 00:27:51.145 }' 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:51.145 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.712 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:51.712 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.712 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.712 [2024-11-26 17:24:21.621453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:51.712 [2024-11-26 17:24:21.621848] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:51.712 [2024-11-26 17:24:21.621873] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:51.713 [2024-11-26 17:24:21.622238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:51.713 [2024-11-26 17:24:21.622460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:51.713 [2024-11-26 17:24:21.622477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:51.713 [2024-11-26 17:24:21.622838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:51.713 BaseBdev4 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.713 [ 00:27:51.713 { 00:27:51.713 "name": "BaseBdev4", 00:27:51.713 "aliases": [ 00:27:51.713 "e8f8d168-e873-4c81-8724-f096988cb0d7" 00:27:51.713 ], 00:27:51.713 "product_name": "Malloc disk", 00:27:51.713 "block_size": 512, 00:27:51.713 "num_blocks": 65536, 00:27:51.713 "uuid": "e8f8d168-e873-4c81-8724-f096988cb0d7", 00:27:51.713 "assigned_rate_limits": { 00:27:51.713 "rw_ios_per_sec": 0, 00:27:51.713 "rw_mbytes_per_sec": 0, 00:27:51.713 "r_mbytes_per_sec": 0, 00:27:51.713 "w_mbytes_per_sec": 0 00:27:51.713 }, 00:27:51.713 "claimed": true, 00:27:51.713 "claim_type": "exclusive_write", 00:27:51.713 "zoned": false, 00:27:51.713 "supported_io_types": { 00:27:51.713 "read": true, 00:27:51.713 "write": true, 00:27:51.713 "unmap": true, 00:27:51.713 "flush": true, 00:27:51.713 "reset": true, 00:27:51.713 "nvme_admin": false, 00:27:51.713 "nvme_io": 
false, 00:27:51.713 "nvme_io_md": false, 00:27:51.713 "write_zeroes": true, 00:27:51.713 "zcopy": true, 00:27:51.713 "get_zone_info": false, 00:27:51.713 "zone_management": false, 00:27:51.713 "zone_append": false, 00:27:51.713 "compare": false, 00:27:51.713 "compare_and_write": false, 00:27:51.713 "abort": true, 00:27:51.713 "seek_hole": false, 00:27:51.713 "seek_data": false, 00:27:51.713 "copy": true, 00:27:51.713 "nvme_iov_md": false 00:27:51.713 }, 00:27:51.713 "memory_domains": [ 00:27:51.713 { 00:27:51.713 "dma_device_id": "system", 00:27:51.713 "dma_device_type": 1 00:27:51.713 }, 00:27:51.713 { 00:27:51.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.713 "dma_device_type": 2 00:27:51.713 } 00:27:51.713 ], 00:27:51.713 "driver_specific": {} 00:27:51.713 } 00:27:51.713 ] 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:51.713 "name": "Existed_Raid", 00:27:51.713 "uuid": "7ad1294d-f3e7-412f-8e7d-7235914dedb0", 00:27:51.713 "strip_size_kb": 0, 00:27:51.713 "state": "online", 00:27:51.713 "raid_level": "raid1", 00:27:51.713 "superblock": false, 00:27:51.713 "num_base_bdevs": 4, 00:27:51.713 "num_base_bdevs_discovered": 4, 00:27:51.713 "num_base_bdevs_operational": 4, 00:27:51.713 "base_bdevs_list": [ 00:27:51.713 { 00:27:51.713 "name": "BaseBdev1", 00:27:51.713 "uuid": "8fde46a9-778e-4834-99d2-6d9ccef6df2f", 00:27:51.713 "is_configured": true, 00:27:51.713 "data_offset": 0, 00:27:51.713 "data_size": 65536 00:27:51.713 }, 00:27:51.713 { 00:27:51.713 "name": "BaseBdev2", 00:27:51.713 "uuid": "299cc96d-82be-4707-a8d9-84c243d692a4", 00:27:51.713 "is_configured": true, 00:27:51.713 "data_offset": 0, 00:27:51.713 "data_size": 65536 00:27:51.713 }, 00:27:51.713 { 00:27:51.713 "name": "BaseBdev3", 00:27:51.713 "uuid": "8215e59b-9684-4b8a-979d-837ebfdebb8b", 
00:27:51.713 "is_configured": true, 00:27:51.713 "data_offset": 0, 00:27:51.713 "data_size": 65536 00:27:51.713 }, 00:27:51.713 { 00:27:51.713 "name": "BaseBdev4", 00:27:51.713 "uuid": "e8f8d168-e873-4c81-8724-f096988cb0d7", 00:27:51.713 "is_configured": true, 00:27:51.713 "data_offset": 0, 00:27:51.713 "data_size": 65536 00:27:51.713 } 00:27:51.713 ] 00:27:51.713 }' 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:51.713 17:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.279 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:52.279 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:52.279 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:52.279 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:52.279 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:52.279 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:52.279 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:52.279 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.279 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.279 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:52.279 [2024-11-26 17:24:22.141297] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:52.279 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:52.280 "name": "Existed_Raid", 00:27:52.280 "aliases": [ 00:27:52.280 "7ad1294d-f3e7-412f-8e7d-7235914dedb0" 00:27:52.280 ], 00:27:52.280 "product_name": "Raid Volume", 00:27:52.280 "block_size": 512, 00:27:52.280 "num_blocks": 65536, 00:27:52.280 "uuid": "7ad1294d-f3e7-412f-8e7d-7235914dedb0", 00:27:52.280 "assigned_rate_limits": { 00:27:52.280 "rw_ios_per_sec": 0, 00:27:52.280 "rw_mbytes_per_sec": 0, 00:27:52.280 "r_mbytes_per_sec": 0, 00:27:52.280 "w_mbytes_per_sec": 0 00:27:52.280 }, 00:27:52.280 "claimed": false, 00:27:52.280 "zoned": false, 00:27:52.280 "supported_io_types": { 00:27:52.280 "read": true, 00:27:52.280 "write": true, 00:27:52.280 "unmap": false, 00:27:52.280 "flush": false, 00:27:52.280 "reset": true, 00:27:52.280 "nvme_admin": false, 00:27:52.280 "nvme_io": false, 00:27:52.280 "nvme_io_md": false, 00:27:52.280 "write_zeroes": true, 00:27:52.280 "zcopy": false, 00:27:52.280 "get_zone_info": false, 00:27:52.280 "zone_management": false, 00:27:52.280 "zone_append": false, 00:27:52.280 "compare": false, 00:27:52.280 "compare_and_write": false, 00:27:52.280 "abort": false, 00:27:52.280 "seek_hole": false, 00:27:52.280 "seek_data": false, 00:27:52.280 "copy": false, 00:27:52.280 "nvme_iov_md": false 00:27:52.280 }, 00:27:52.280 "memory_domains": [ 00:27:52.280 { 00:27:52.280 "dma_device_id": "system", 00:27:52.280 "dma_device_type": 1 00:27:52.280 }, 00:27:52.280 { 00:27:52.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.280 "dma_device_type": 2 00:27:52.280 }, 00:27:52.280 { 00:27:52.280 "dma_device_id": "system", 00:27:52.280 "dma_device_type": 1 00:27:52.280 }, 00:27:52.280 { 00:27:52.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.280 "dma_device_type": 2 00:27:52.280 }, 00:27:52.280 { 00:27:52.280 "dma_device_id": "system", 00:27:52.280 "dma_device_type": 1 00:27:52.280 }, 00:27:52.280 { 00:27:52.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.280 "dma_device_type": 2 
00:27:52.280 }, 00:27:52.280 { 00:27:52.280 "dma_device_id": "system", 00:27:52.280 "dma_device_type": 1 00:27:52.280 }, 00:27:52.280 { 00:27:52.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.280 "dma_device_type": 2 00:27:52.280 } 00:27:52.280 ], 00:27:52.280 "driver_specific": { 00:27:52.280 "raid": { 00:27:52.280 "uuid": "7ad1294d-f3e7-412f-8e7d-7235914dedb0", 00:27:52.280 "strip_size_kb": 0, 00:27:52.280 "state": "online", 00:27:52.280 "raid_level": "raid1", 00:27:52.280 "superblock": false, 00:27:52.280 "num_base_bdevs": 4, 00:27:52.280 "num_base_bdevs_discovered": 4, 00:27:52.280 "num_base_bdevs_operational": 4, 00:27:52.280 "base_bdevs_list": [ 00:27:52.280 { 00:27:52.280 "name": "BaseBdev1", 00:27:52.280 "uuid": "8fde46a9-778e-4834-99d2-6d9ccef6df2f", 00:27:52.280 "is_configured": true, 00:27:52.280 "data_offset": 0, 00:27:52.280 "data_size": 65536 00:27:52.280 }, 00:27:52.280 { 00:27:52.280 "name": "BaseBdev2", 00:27:52.280 "uuid": "299cc96d-82be-4707-a8d9-84c243d692a4", 00:27:52.280 "is_configured": true, 00:27:52.280 "data_offset": 0, 00:27:52.280 "data_size": 65536 00:27:52.280 }, 00:27:52.280 { 00:27:52.280 "name": "BaseBdev3", 00:27:52.280 "uuid": "8215e59b-9684-4b8a-979d-837ebfdebb8b", 00:27:52.280 "is_configured": true, 00:27:52.280 "data_offset": 0, 00:27:52.280 "data_size": 65536 00:27:52.280 }, 00:27:52.280 { 00:27:52.280 "name": "BaseBdev4", 00:27:52.280 "uuid": "e8f8d168-e873-4c81-8724-f096988cb0d7", 00:27:52.280 "is_configured": true, 00:27:52.280 "data_offset": 0, 00:27:52.280 "data_size": 65536 00:27:52.280 } 00:27:52.280 ] 00:27:52.280 } 00:27:52.280 } 00:27:52.280 }' 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:52.280 BaseBdev2 00:27:52.280 BaseBdev3 00:27:52.280 BaseBdev4' 00:27:52.280 
17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:52.280 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:52.281 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:52.281 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.281 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.281 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:52.281 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.539 [2024-11-26 17:24:22.468561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:52.539 "name": "Existed_Raid", 00:27:52.539 "uuid": "7ad1294d-f3e7-412f-8e7d-7235914dedb0", 00:27:52.539 "strip_size_kb": 0, 00:27:52.539 "state": "online", 00:27:52.539 "raid_level": "raid1", 00:27:52.539 "superblock": false, 00:27:52.539 "num_base_bdevs": 4, 00:27:52.539 "num_base_bdevs_discovered": 3, 00:27:52.539 "num_base_bdevs_operational": 3, 00:27:52.539 "base_bdevs_list": [ 00:27:52.539 { 00:27:52.539 "name": null, 00:27:52.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.539 "is_configured": false, 00:27:52.539 "data_offset": 0, 00:27:52.539 "data_size": 65536 00:27:52.539 }, 00:27:52.539 { 00:27:52.539 "name": "BaseBdev2", 00:27:52.539 "uuid": "299cc96d-82be-4707-a8d9-84c243d692a4", 00:27:52.539 "is_configured": true, 00:27:52.539 "data_offset": 0, 00:27:52.539 "data_size": 65536 00:27:52.539 }, 00:27:52.539 { 00:27:52.539 "name": "BaseBdev3", 00:27:52.539 "uuid": "8215e59b-9684-4b8a-979d-837ebfdebb8b", 00:27:52.539 "is_configured": true, 00:27:52.539 "data_offset": 0, 00:27:52.539 "data_size": 65536 00:27:52.539 }, 00:27:52.539 { 
00:27:52.539 "name": "BaseBdev4", 00:27:52.539 "uuid": "e8f8d168-e873-4c81-8724-f096988cb0d7", 00:27:52.539 "is_configured": true, 00:27:52.539 "data_offset": 0, 00:27:52.539 "data_size": 65536 00:27:52.539 } 00:27:52.539 ] 00:27:52.539 }' 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:52.539 17:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.105 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:53.105 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:53.105 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.105 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.105 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.105 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:53.105 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.105 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:53.105 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:53.105 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:53.105 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.105 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.105 [2024-11-26 17:24:23.077821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:53.106 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.106 
17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:53.106 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:53.106 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.106 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.106 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:53.106 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.106 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.364 [2024-11-26 17:24:23.233966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.364 17:24:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.364 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.364 [2024-11-26 17:24:23.390218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:53.364 [2024-11-26 17:24:23.390359] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:53.624 [2024-11-26 17:24:23.495587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:53.624 [2024-11-26 17:24:23.495670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:53.624 [2024-11-26 17:24:23.495689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.624 17:24:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.624 BaseBdev2 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:53.624 17:24:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.624 [ 00:27:53.624 { 00:27:53.624 "name": "BaseBdev2", 00:27:53.624 "aliases": [ 00:27:53.624 "8d4dc15d-350b-4b7a-aae8-8feff715d385" 00:27:53.624 ], 00:27:53.624 "product_name": "Malloc disk", 00:27:53.624 "block_size": 512, 00:27:53.624 "num_blocks": 65536, 00:27:53.624 "uuid": "8d4dc15d-350b-4b7a-aae8-8feff715d385", 00:27:53.624 "assigned_rate_limits": { 00:27:53.624 "rw_ios_per_sec": 0, 00:27:53.624 "rw_mbytes_per_sec": 0, 00:27:53.624 "r_mbytes_per_sec": 0, 00:27:53.624 "w_mbytes_per_sec": 0 00:27:53.624 }, 00:27:53.624 "claimed": false, 00:27:53.624 "zoned": false, 00:27:53.624 "supported_io_types": { 00:27:53.624 "read": true, 00:27:53.624 "write": true, 00:27:53.624 "unmap": true, 00:27:53.624 "flush": true, 00:27:53.624 "reset": true, 00:27:53.624 "nvme_admin": false, 00:27:53.624 "nvme_io": false, 00:27:53.624 "nvme_io_md": false, 00:27:53.624 "write_zeroes": true, 00:27:53.624 "zcopy": true, 00:27:53.624 "get_zone_info": false, 00:27:53.624 "zone_management": false, 00:27:53.624 "zone_append": false, 00:27:53.624 "compare": false, 00:27:53.624 "compare_and_write": false, 
00:27:53.624 "abort": true, 00:27:53.624 "seek_hole": false, 00:27:53.624 "seek_data": false, 00:27:53.624 "copy": true, 00:27:53.624 "nvme_iov_md": false 00:27:53.624 }, 00:27:53.624 "memory_domains": [ 00:27:53.624 { 00:27:53.624 "dma_device_id": "system", 00:27:53.624 "dma_device_type": 1 00:27:53.624 }, 00:27:53.624 { 00:27:53.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:53.624 "dma_device_type": 2 00:27:53.624 } 00:27:53.624 ], 00:27:53.624 "driver_specific": {} 00:27:53.624 } 00:27:53.624 ] 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.624 BaseBdev3 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:53.624 17:24:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.624 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.624 [ 00:27:53.624 { 00:27:53.624 "name": "BaseBdev3", 00:27:53.624 "aliases": [ 00:27:53.624 "a97e5484-8179-4761-a47c-b1ea68f903f9" 00:27:53.624 ], 00:27:53.624 "product_name": "Malloc disk", 00:27:53.624 "block_size": 512, 00:27:53.624 "num_blocks": 65536, 00:27:53.624 "uuid": "a97e5484-8179-4761-a47c-b1ea68f903f9", 00:27:53.624 "assigned_rate_limits": { 00:27:53.624 "rw_ios_per_sec": 0, 00:27:53.624 "rw_mbytes_per_sec": 0, 00:27:53.624 "r_mbytes_per_sec": 0, 00:27:53.624 "w_mbytes_per_sec": 0 00:27:53.624 }, 00:27:53.624 "claimed": false, 00:27:53.624 "zoned": false, 00:27:53.624 "supported_io_types": { 00:27:53.624 "read": true, 00:27:53.624 "write": true, 00:27:53.624 "unmap": true, 00:27:53.624 "flush": true, 00:27:53.624 "reset": true, 00:27:53.624 "nvme_admin": false, 00:27:53.624 "nvme_io": false, 00:27:53.624 "nvme_io_md": false, 00:27:53.624 "write_zeroes": true, 00:27:53.624 "zcopy": true, 00:27:53.624 "get_zone_info": false, 00:27:53.624 "zone_management": false, 00:27:53.624 "zone_append": false, 00:27:53.624 "compare": false, 00:27:53.624 "compare_and_write": false, 
00:27:53.624 "abort": true, 00:27:53.624 "seek_hole": false, 00:27:53.624 "seek_data": false, 00:27:53.624 "copy": true, 00:27:53.624 "nvme_iov_md": false 00:27:53.624 }, 00:27:53.624 "memory_domains": [ 00:27:53.624 { 00:27:53.624 "dma_device_id": "system", 00:27:53.624 "dma_device_type": 1 00:27:53.624 }, 00:27:53.624 { 00:27:53.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:53.884 "dma_device_type": 2 00:27:53.884 } 00:27:53.884 ], 00:27:53.884 "driver_specific": {} 00:27:53.884 } 00:27:53.884 ] 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.884 BaseBdev4 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:53.884 17:24:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.884 [ 00:27:53.884 { 00:27:53.884 "name": "BaseBdev4", 00:27:53.884 "aliases": [ 00:27:53.884 "6e4ad8f0-b9e1-4f7c-9fab-67322f7d52f9" 00:27:53.884 ], 00:27:53.884 "product_name": "Malloc disk", 00:27:53.884 "block_size": 512, 00:27:53.884 "num_blocks": 65536, 00:27:53.884 "uuid": "6e4ad8f0-b9e1-4f7c-9fab-67322f7d52f9", 00:27:53.884 "assigned_rate_limits": { 00:27:53.884 "rw_ios_per_sec": 0, 00:27:53.884 "rw_mbytes_per_sec": 0, 00:27:53.884 "r_mbytes_per_sec": 0, 00:27:53.884 "w_mbytes_per_sec": 0 00:27:53.884 }, 00:27:53.884 "claimed": false, 00:27:53.884 "zoned": false, 00:27:53.884 "supported_io_types": { 00:27:53.884 "read": true, 00:27:53.884 "write": true, 00:27:53.884 "unmap": true, 00:27:53.884 "flush": true, 00:27:53.884 "reset": true, 00:27:53.884 "nvme_admin": false, 00:27:53.884 "nvme_io": false, 00:27:53.884 "nvme_io_md": false, 00:27:53.884 "write_zeroes": true, 00:27:53.884 "zcopy": true, 00:27:53.884 "get_zone_info": false, 00:27:53.884 "zone_management": false, 00:27:53.884 "zone_append": false, 00:27:53.884 "compare": false, 00:27:53.884 "compare_and_write": false, 
00:27:53.884 "abort": true, 00:27:53.884 "seek_hole": false, 00:27:53.884 "seek_data": false, 00:27:53.884 "copy": true, 00:27:53.884 "nvme_iov_md": false 00:27:53.884 }, 00:27:53.884 "memory_domains": [ 00:27:53.884 { 00:27:53.884 "dma_device_id": "system", 00:27:53.884 "dma_device_type": 1 00:27:53.884 }, 00:27:53.884 { 00:27:53.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:53.884 "dma_device_type": 2 00:27:53.884 } 00:27:53.884 ], 00:27:53.884 "driver_specific": {} 00:27:53.884 } 00:27:53.884 ] 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.884 [2024-11-26 17:24:23.837264] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:53.884 [2024-11-26 17:24:23.837578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:53.884 [2024-11-26 17:24:23.837703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:53.884 [2024-11-26 17:24:23.840211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:53.884 [2024-11-26 17:24:23.840415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:53.884 17:24:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:53.884 "name": "Existed_Raid", 00:27:53.884 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:53.884 "strip_size_kb": 0, 00:27:53.884 "state": "configuring", 00:27:53.884 "raid_level": "raid1", 00:27:53.884 "superblock": false, 00:27:53.884 "num_base_bdevs": 4, 00:27:53.884 "num_base_bdevs_discovered": 3, 00:27:53.884 "num_base_bdevs_operational": 4, 00:27:53.884 "base_bdevs_list": [ 00:27:53.884 { 00:27:53.884 "name": "BaseBdev1", 00:27:53.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.884 "is_configured": false, 00:27:53.884 "data_offset": 0, 00:27:53.884 "data_size": 0 00:27:53.884 }, 00:27:53.884 { 00:27:53.884 "name": "BaseBdev2", 00:27:53.884 "uuid": "8d4dc15d-350b-4b7a-aae8-8feff715d385", 00:27:53.884 "is_configured": true, 00:27:53.884 "data_offset": 0, 00:27:53.884 "data_size": 65536 00:27:53.884 }, 00:27:53.884 { 00:27:53.884 "name": "BaseBdev3", 00:27:53.884 "uuid": "a97e5484-8179-4761-a47c-b1ea68f903f9", 00:27:53.884 "is_configured": true, 00:27:53.884 "data_offset": 0, 00:27:53.884 "data_size": 65536 00:27:53.884 }, 00:27:53.884 { 00:27:53.884 "name": "BaseBdev4", 00:27:53.884 "uuid": "6e4ad8f0-b9e1-4f7c-9fab-67322f7d52f9", 00:27:53.884 "is_configured": true, 00:27:53.884 "data_offset": 0, 00:27:53.884 "data_size": 65536 00:27:53.884 } 00:27:53.884 ] 00:27:53.884 }' 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:53.884 17:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.453 [2024-11-26 17:24:24.268748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:54.453 "name": "Existed_Raid", 00:27:54.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:54.453 
"strip_size_kb": 0, 00:27:54.453 "state": "configuring", 00:27:54.453 "raid_level": "raid1", 00:27:54.453 "superblock": false, 00:27:54.453 "num_base_bdevs": 4, 00:27:54.453 "num_base_bdevs_discovered": 2, 00:27:54.453 "num_base_bdevs_operational": 4, 00:27:54.453 "base_bdevs_list": [ 00:27:54.453 { 00:27:54.453 "name": "BaseBdev1", 00:27:54.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:54.453 "is_configured": false, 00:27:54.453 "data_offset": 0, 00:27:54.453 "data_size": 0 00:27:54.453 }, 00:27:54.453 { 00:27:54.453 "name": null, 00:27:54.453 "uuid": "8d4dc15d-350b-4b7a-aae8-8feff715d385", 00:27:54.453 "is_configured": false, 00:27:54.453 "data_offset": 0, 00:27:54.453 "data_size": 65536 00:27:54.453 }, 00:27:54.453 { 00:27:54.453 "name": "BaseBdev3", 00:27:54.453 "uuid": "a97e5484-8179-4761-a47c-b1ea68f903f9", 00:27:54.453 "is_configured": true, 00:27:54.453 "data_offset": 0, 00:27:54.453 "data_size": 65536 00:27:54.453 }, 00:27:54.453 { 00:27:54.453 "name": "BaseBdev4", 00:27:54.453 "uuid": "6e4ad8f0-b9e1-4f7c-9fab-67322f7d52f9", 00:27:54.453 "is_configured": true, 00:27:54.453 "data_offset": 0, 00:27:54.453 "data_size": 65536 00:27:54.453 } 00:27:54.453 ] 00:27:54.453 }' 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:54.453 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.713 17:24:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.713 [2024-11-26 17:24:24.809493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:54.713 BaseBdev1 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.713 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.972 [ 00:27:54.972 { 00:27:54.972 "name": "BaseBdev1", 00:27:54.972 "aliases": [ 00:27:54.972 "da2ac7ad-5660-46ab-9ed6-a7878ea21031" 00:27:54.972 ], 00:27:54.972 "product_name": "Malloc disk", 00:27:54.972 "block_size": 512, 00:27:54.972 "num_blocks": 65536, 00:27:54.972 "uuid": "da2ac7ad-5660-46ab-9ed6-a7878ea21031", 00:27:54.972 "assigned_rate_limits": { 00:27:54.972 "rw_ios_per_sec": 0, 00:27:54.972 "rw_mbytes_per_sec": 0, 00:27:54.972 "r_mbytes_per_sec": 0, 00:27:54.972 "w_mbytes_per_sec": 0 00:27:54.972 }, 00:27:54.972 "claimed": true, 00:27:54.972 "claim_type": "exclusive_write", 00:27:54.972 "zoned": false, 00:27:54.972 "supported_io_types": { 00:27:54.972 "read": true, 00:27:54.972 "write": true, 00:27:54.972 "unmap": true, 00:27:54.972 "flush": true, 00:27:54.972 "reset": true, 00:27:54.972 "nvme_admin": false, 00:27:54.972 "nvme_io": false, 00:27:54.972 "nvme_io_md": false, 00:27:54.972 "write_zeroes": true, 00:27:54.972 "zcopy": true, 00:27:54.972 "get_zone_info": false, 00:27:54.972 "zone_management": false, 00:27:54.972 "zone_append": false, 00:27:54.972 "compare": false, 00:27:54.972 "compare_and_write": false, 00:27:54.972 "abort": true, 00:27:54.972 "seek_hole": false, 00:27:54.972 "seek_data": false, 00:27:54.972 "copy": true, 00:27:54.972 "nvme_iov_md": false 00:27:54.972 }, 00:27:54.972 "memory_domains": [ 00:27:54.972 { 00:27:54.972 "dma_device_id": "system", 00:27:54.972 "dma_device_type": 1 00:27:54.972 }, 00:27:54.972 { 00:27:54.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:54.972 "dma_device_type": 2 00:27:54.972 } 00:27:54.972 ], 00:27:54.972 "driver_specific": {} 00:27:54.972 } 00:27:54.972 ] 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.972 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:54.972 "name": "Existed_Raid", 00:27:54.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:54.972 
"strip_size_kb": 0, 00:27:54.972 "state": "configuring", 00:27:54.972 "raid_level": "raid1", 00:27:54.972 "superblock": false, 00:27:54.972 "num_base_bdevs": 4, 00:27:54.972 "num_base_bdevs_discovered": 3, 00:27:54.972 "num_base_bdevs_operational": 4, 00:27:54.972 "base_bdevs_list": [ 00:27:54.972 { 00:27:54.972 "name": "BaseBdev1", 00:27:54.973 "uuid": "da2ac7ad-5660-46ab-9ed6-a7878ea21031", 00:27:54.973 "is_configured": true, 00:27:54.973 "data_offset": 0, 00:27:54.973 "data_size": 65536 00:27:54.973 }, 00:27:54.973 { 00:27:54.973 "name": null, 00:27:54.973 "uuid": "8d4dc15d-350b-4b7a-aae8-8feff715d385", 00:27:54.973 "is_configured": false, 00:27:54.973 "data_offset": 0, 00:27:54.973 "data_size": 65536 00:27:54.973 }, 00:27:54.973 { 00:27:54.973 "name": "BaseBdev3", 00:27:54.973 "uuid": "a97e5484-8179-4761-a47c-b1ea68f903f9", 00:27:54.973 "is_configured": true, 00:27:54.973 "data_offset": 0, 00:27:54.973 "data_size": 65536 00:27:54.973 }, 00:27:54.973 { 00:27:54.973 "name": "BaseBdev4", 00:27:54.973 "uuid": "6e4ad8f0-b9e1-4f7c-9fab-67322f7d52f9", 00:27:54.973 "is_configured": true, 00:27:54.973 "data_offset": 0, 00:27:54.973 "data_size": 65536 00:27:54.973 } 00:27:54.973 ] 00:27:54.973 }' 00:27:54.973 17:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:54.973 17:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.231 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:55.231 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:55.231 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.231 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.231 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.231 
17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:27:55.231 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:27:55.231 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.231 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.231 [2024-11-26 17:24:25.340901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:55.490 "name": "Existed_Raid", 00:27:55.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.490 "strip_size_kb": 0, 00:27:55.490 "state": "configuring", 00:27:55.490 "raid_level": "raid1", 00:27:55.490 "superblock": false, 00:27:55.490 "num_base_bdevs": 4, 00:27:55.490 "num_base_bdevs_discovered": 2, 00:27:55.490 "num_base_bdevs_operational": 4, 00:27:55.490 "base_bdevs_list": [ 00:27:55.490 { 00:27:55.490 "name": "BaseBdev1", 00:27:55.490 "uuid": "da2ac7ad-5660-46ab-9ed6-a7878ea21031", 00:27:55.490 "is_configured": true, 00:27:55.490 "data_offset": 0, 00:27:55.490 "data_size": 65536 00:27:55.490 }, 00:27:55.490 { 00:27:55.490 "name": null, 00:27:55.490 "uuid": "8d4dc15d-350b-4b7a-aae8-8feff715d385", 00:27:55.490 "is_configured": false, 00:27:55.490 "data_offset": 0, 00:27:55.490 "data_size": 65536 00:27:55.490 }, 00:27:55.490 { 00:27:55.490 "name": null, 00:27:55.490 "uuid": "a97e5484-8179-4761-a47c-b1ea68f903f9", 00:27:55.490 "is_configured": false, 00:27:55.490 "data_offset": 0, 00:27:55.490 "data_size": 65536 00:27:55.490 }, 00:27:55.490 { 00:27:55.490 "name": "BaseBdev4", 00:27:55.490 "uuid": "6e4ad8f0-b9e1-4f7c-9fab-67322f7d52f9", 00:27:55.490 "is_configured": true, 00:27:55.490 "data_offset": 0, 00:27:55.490 "data_size": 65536 00:27:55.490 } 00:27:55.490 ] 00:27:55.490 }' 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:55.490 17:24:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.748 [2024-11-26 17:24:25.852728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:55.748 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:56.006 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:56.006 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:56.006 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.006 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:56.006 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.006 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:56.006 "name": "Existed_Raid", 00:27:56.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.006 "strip_size_kb": 0, 00:27:56.006 "state": "configuring", 00:27:56.006 "raid_level": "raid1", 00:27:56.006 "superblock": false, 00:27:56.006 "num_base_bdevs": 4, 00:27:56.006 "num_base_bdevs_discovered": 3, 00:27:56.006 "num_base_bdevs_operational": 4, 00:27:56.006 "base_bdevs_list": [ 00:27:56.006 { 00:27:56.006 "name": "BaseBdev1", 00:27:56.006 "uuid": "da2ac7ad-5660-46ab-9ed6-a7878ea21031", 00:27:56.006 "is_configured": true, 00:27:56.006 "data_offset": 0, 00:27:56.006 "data_size": 65536 00:27:56.006 }, 00:27:56.006 { 00:27:56.006 "name": null, 00:27:56.006 "uuid": "8d4dc15d-350b-4b7a-aae8-8feff715d385", 00:27:56.006 "is_configured": false, 00:27:56.006 "data_offset": 0, 00:27:56.006 "data_size": 65536 00:27:56.006 }, 00:27:56.006 { 
00:27:56.006 "name": "BaseBdev3", 00:27:56.006 "uuid": "a97e5484-8179-4761-a47c-b1ea68f903f9", 00:27:56.006 "is_configured": true, 00:27:56.006 "data_offset": 0, 00:27:56.006 "data_size": 65536 00:27:56.006 }, 00:27:56.006 { 00:27:56.006 "name": "BaseBdev4", 00:27:56.006 "uuid": "6e4ad8f0-b9e1-4f7c-9fab-67322f7d52f9", 00:27:56.006 "is_configured": true, 00:27:56.006 "data_offset": 0, 00:27:56.006 "data_size": 65536 00:27:56.006 } 00:27:56.006 ] 00:27:56.006 }' 00:27:56.006 17:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:56.006 17:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:56.264 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:56.264 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:56.264 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.264 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:56.264 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.264 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:27:56.264 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:56.264 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.264 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:56.264 [2024-11-26 17:24:26.360768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:56.523 "name": "Existed_Raid", 00:27:56.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.523 "strip_size_kb": 0, 00:27:56.523 "state": "configuring", 00:27:56.523 "raid_level": "raid1", 00:27:56.523 "superblock": false, 00:27:56.523 
"num_base_bdevs": 4, 00:27:56.523 "num_base_bdevs_discovered": 2, 00:27:56.523 "num_base_bdevs_operational": 4, 00:27:56.523 "base_bdevs_list": [ 00:27:56.523 { 00:27:56.523 "name": null, 00:27:56.523 "uuid": "da2ac7ad-5660-46ab-9ed6-a7878ea21031", 00:27:56.523 "is_configured": false, 00:27:56.523 "data_offset": 0, 00:27:56.523 "data_size": 65536 00:27:56.523 }, 00:27:56.523 { 00:27:56.523 "name": null, 00:27:56.523 "uuid": "8d4dc15d-350b-4b7a-aae8-8feff715d385", 00:27:56.523 "is_configured": false, 00:27:56.523 "data_offset": 0, 00:27:56.523 "data_size": 65536 00:27:56.523 }, 00:27:56.523 { 00:27:56.523 "name": "BaseBdev3", 00:27:56.523 "uuid": "a97e5484-8179-4761-a47c-b1ea68f903f9", 00:27:56.523 "is_configured": true, 00:27:56.523 "data_offset": 0, 00:27:56.523 "data_size": 65536 00:27:56.523 }, 00:27:56.523 { 00:27:56.523 "name": "BaseBdev4", 00:27:56.523 "uuid": "6e4ad8f0-b9e1-4f7c-9fab-67322f7d52f9", 00:27:56.523 "is_configured": true, 00:27:56.523 "data_offset": 0, 00:27:56.523 "data_size": 65536 00:27:56.523 } 00:27:56.523 ] 00:27:56.523 }' 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:56.523 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:27:57.092 17:24:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.092 [2024-11-26 17:24:26.953039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.092 17:24:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.092 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:57.092 "name": "Existed_Raid", 00:27:57.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:57.092 "strip_size_kb": 0, 00:27:57.092 "state": "configuring", 00:27:57.092 "raid_level": "raid1", 00:27:57.092 "superblock": false, 00:27:57.092 "num_base_bdevs": 4, 00:27:57.092 "num_base_bdevs_discovered": 3, 00:27:57.092 "num_base_bdevs_operational": 4, 00:27:57.092 "base_bdevs_list": [ 00:27:57.092 { 00:27:57.092 "name": null, 00:27:57.092 "uuid": "da2ac7ad-5660-46ab-9ed6-a7878ea21031", 00:27:57.092 "is_configured": false, 00:27:57.092 "data_offset": 0, 00:27:57.092 "data_size": 65536 00:27:57.092 }, 00:27:57.092 { 00:27:57.092 "name": "BaseBdev2", 00:27:57.092 "uuid": "8d4dc15d-350b-4b7a-aae8-8feff715d385", 00:27:57.092 "is_configured": true, 00:27:57.092 "data_offset": 0, 00:27:57.092 "data_size": 65536 00:27:57.092 }, 00:27:57.092 { 00:27:57.092 "name": "BaseBdev3", 00:27:57.092 "uuid": "a97e5484-8179-4761-a47c-b1ea68f903f9", 00:27:57.092 "is_configured": true, 00:27:57.092 "data_offset": 0, 00:27:57.093 "data_size": 65536 00:27:57.093 }, 00:27:57.093 { 00:27:57.093 "name": "BaseBdev4", 00:27:57.093 "uuid": "6e4ad8f0-b9e1-4f7c-9fab-67322f7d52f9", 00:27:57.093 "is_configured": true, 00:27:57.093 "data_offset": 0, 00:27:57.093 "data_size": 65536 00:27:57.093 } 00:27:57.093 ] 00:27:57.093 }' 00:27:57.093 17:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:57.093 17:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.352 17:24:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.352 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:57.352 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.352 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.352 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.352 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:27:57.352 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.352 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:57.352 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.352 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u da2ac7ad-5660-46ab-9ed6-a7878ea21031 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.610 [2024-11-26 17:24:27.528500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:57.610 [2024-11-26 17:24:27.528817] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:57.610 [2024-11-26 17:24:27.528844] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:57.610 [2024-11-26 17:24:27.529177] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:27:57.610 [2024-11-26 17:24:27.529363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:57.610 [2024-11-26 17:24:27.529374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:27:57.610 [2024-11-26 17:24:27.529717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:57.610 NewBaseBdev 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.610 17:24:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.610 [ 00:27:57.610 { 00:27:57.610 "name": "NewBaseBdev", 00:27:57.610 "aliases": [ 00:27:57.610 "da2ac7ad-5660-46ab-9ed6-a7878ea21031" 00:27:57.610 ], 00:27:57.610 "product_name": "Malloc disk", 00:27:57.610 "block_size": 512, 00:27:57.610 "num_blocks": 65536, 00:27:57.610 "uuid": "da2ac7ad-5660-46ab-9ed6-a7878ea21031", 00:27:57.610 "assigned_rate_limits": { 00:27:57.610 "rw_ios_per_sec": 0, 00:27:57.610 "rw_mbytes_per_sec": 0, 00:27:57.610 "r_mbytes_per_sec": 0, 00:27:57.610 "w_mbytes_per_sec": 0 00:27:57.610 }, 00:27:57.610 "claimed": true, 00:27:57.610 "claim_type": "exclusive_write", 00:27:57.610 "zoned": false, 00:27:57.610 "supported_io_types": { 00:27:57.610 "read": true, 00:27:57.610 "write": true, 00:27:57.610 "unmap": true, 00:27:57.610 "flush": true, 00:27:57.610 "reset": true, 00:27:57.610 "nvme_admin": false, 00:27:57.610 "nvme_io": false, 00:27:57.610 "nvme_io_md": false, 00:27:57.610 "write_zeroes": true, 00:27:57.610 "zcopy": true, 00:27:57.610 "get_zone_info": false, 00:27:57.610 "zone_management": false, 00:27:57.610 "zone_append": false, 00:27:57.610 "compare": false, 00:27:57.610 "compare_and_write": false, 00:27:57.610 "abort": true, 00:27:57.610 "seek_hole": false, 00:27:57.610 "seek_data": false, 00:27:57.610 "copy": true, 00:27:57.610 "nvme_iov_md": false 00:27:57.610 }, 00:27:57.610 "memory_domains": [ 00:27:57.610 { 00:27:57.610 "dma_device_id": "system", 00:27:57.610 "dma_device_type": 1 00:27:57.610 }, 00:27:57.610 { 00:27:57.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.610 "dma_device_type": 2 00:27:57.610 } 00:27:57.610 ], 00:27:57.610 "driver_specific": {} 00:27:57.610 } 00:27:57.610 ] 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:57.610 17:24:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.610 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:57.610 "name": "Existed_Raid", 00:27:57.610 "uuid": "96b572af-d2a4-43fe-ae3c-8215aaf47e1f", 00:27:57.610 "strip_size_kb": 0, 00:27:57.610 "state": "online", 00:27:57.610 "raid_level": "raid1", 
00:27:57.610 "superblock": false, 00:27:57.610 "num_base_bdevs": 4, 00:27:57.610 "num_base_bdevs_discovered": 4, 00:27:57.610 "num_base_bdevs_operational": 4, 00:27:57.610 "base_bdevs_list": [ 00:27:57.610 { 00:27:57.610 "name": "NewBaseBdev", 00:27:57.610 "uuid": "da2ac7ad-5660-46ab-9ed6-a7878ea21031", 00:27:57.610 "is_configured": true, 00:27:57.610 "data_offset": 0, 00:27:57.610 "data_size": 65536 00:27:57.610 }, 00:27:57.610 { 00:27:57.610 "name": "BaseBdev2", 00:27:57.610 "uuid": "8d4dc15d-350b-4b7a-aae8-8feff715d385", 00:27:57.610 "is_configured": true, 00:27:57.610 "data_offset": 0, 00:27:57.610 "data_size": 65536 00:27:57.611 }, 00:27:57.611 { 00:27:57.611 "name": "BaseBdev3", 00:27:57.611 "uuid": "a97e5484-8179-4761-a47c-b1ea68f903f9", 00:27:57.611 "is_configured": true, 00:27:57.611 "data_offset": 0, 00:27:57.611 "data_size": 65536 00:27:57.611 }, 00:27:57.611 { 00:27:57.611 "name": "BaseBdev4", 00:27:57.611 "uuid": "6e4ad8f0-b9e1-4f7c-9fab-67322f7d52f9", 00:27:57.611 "is_configured": true, 00:27:57.611 "data_offset": 0, 00:27:57.611 "data_size": 65536 00:27:57.611 } 00:27:57.611 ] 00:27:57.611 }' 00:27:57.611 17:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:57.611 17:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.178 [2024-11-26 17:24:28.048207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:58.178 "name": "Existed_Raid", 00:27:58.178 "aliases": [ 00:27:58.178 "96b572af-d2a4-43fe-ae3c-8215aaf47e1f" 00:27:58.178 ], 00:27:58.178 "product_name": "Raid Volume", 00:27:58.178 "block_size": 512, 00:27:58.178 "num_blocks": 65536, 00:27:58.178 "uuid": "96b572af-d2a4-43fe-ae3c-8215aaf47e1f", 00:27:58.178 "assigned_rate_limits": { 00:27:58.178 "rw_ios_per_sec": 0, 00:27:58.178 "rw_mbytes_per_sec": 0, 00:27:58.178 "r_mbytes_per_sec": 0, 00:27:58.178 "w_mbytes_per_sec": 0 00:27:58.178 }, 00:27:58.178 "claimed": false, 00:27:58.178 "zoned": false, 00:27:58.178 "supported_io_types": { 00:27:58.178 "read": true, 00:27:58.178 "write": true, 00:27:58.178 "unmap": false, 00:27:58.178 "flush": false, 00:27:58.178 "reset": true, 00:27:58.178 "nvme_admin": false, 00:27:58.178 "nvme_io": false, 00:27:58.178 "nvme_io_md": false, 00:27:58.178 "write_zeroes": true, 00:27:58.178 "zcopy": false, 00:27:58.178 "get_zone_info": false, 00:27:58.178 "zone_management": false, 00:27:58.178 "zone_append": false, 00:27:58.178 "compare": false, 00:27:58.178 "compare_and_write": false, 00:27:58.178 "abort": false, 00:27:58.178 "seek_hole": false, 00:27:58.178 "seek_data": false, 00:27:58.178 "copy": false, 00:27:58.178 
"nvme_iov_md": false 00:27:58.178 }, 00:27:58.178 "memory_domains": [ 00:27:58.178 { 00:27:58.178 "dma_device_id": "system", 00:27:58.178 "dma_device_type": 1 00:27:58.178 }, 00:27:58.178 { 00:27:58.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.178 "dma_device_type": 2 00:27:58.178 }, 00:27:58.178 { 00:27:58.178 "dma_device_id": "system", 00:27:58.178 "dma_device_type": 1 00:27:58.178 }, 00:27:58.178 { 00:27:58.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.178 "dma_device_type": 2 00:27:58.178 }, 00:27:58.178 { 00:27:58.178 "dma_device_id": "system", 00:27:58.178 "dma_device_type": 1 00:27:58.178 }, 00:27:58.178 { 00:27:58.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.178 "dma_device_type": 2 00:27:58.178 }, 00:27:58.178 { 00:27:58.178 "dma_device_id": "system", 00:27:58.178 "dma_device_type": 1 00:27:58.178 }, 00:27:58.178 { 00:27:58.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.178 "dma_device_type": 2 00:27:58.178 } 00:27:58.178 ], 00:27:58.178 "driver_specific": { 00:27:58.178 "raid": { 00:27:58.178 "uuid": "96b572af-d2a4-43fe-ae3c-8215aaf47e1f", 00:27:58.178 "strip_size_kb": 0, 00:27:58.178 "state": "online", 00:27:58.178 "raid_level": "raid1", 00:27:58.178 "superblock": false, 00:27:58.178 "num_base_bdevs": 4, 00:27:58.178 "num_base_bdevs_discovered": 4, 00:27:58.178 "num_base_bdevs_operational": 4, 00:27:58.178 "base_bdevs_list": [ 00:27:58.178 { 00:27:58.178 "name": "NewBaseBdev", 00:27:58.178 "uuid": "da2ac7ad-5660-46ab-9ed6-a7878ea21031", 00:27:58.178 "is_configured": true, 00:27:58.178 "data_offset": 0, 00:27:58.178 "data_size": 65536 00:27:58.178 }, 00:27:58.178 { 00:27:58.178 "name": "BaseBdev2", 00:27:58.178 "uuid": "8d4dc15d-350b-4b7a-aae8-8feff715d385", 00:27:58.178 "is_configured": true, 00:27:58.178 "data_offset": 0, 00:27:58.178 "data_size": 65536 00:27:58.178 }, 00:27:58.178 { 00:27:58.178 "name": "BaseBdev3", 00:27:58.178 "uuid": "a97e5484-8179-4761-a47c-b1ea68f903f9", 00:27:58.178 "is_configured": true, 
00:27:58.178 "data_offset": 0, 00:27:58.178 "data_size": 65536 00:27:58.178 }, 00:27:58.178 { 00:27:58.178 "name": "BaseBdev4", 00:27:58.178 "uuid": "6e4ad8f0-b9e1-4f7c-9fab-67322f7d52f9", 00:27:58.178 "is_configured": true, 00:27:58.178 "data_offset": 0, 00:27:58.178 "data_size": 65536 00:27:58.178 } 00:27:58.178 ] 00:27:58.178 } 00:27:58.178 } 00:27:58.178 }' 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:27:58.178 BaseBdev2 00:27:58.178 BaseBdev3 00:27:58.178 BaseBdev4' 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:58.178 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.437 [2024-11-26 17:24:28.351490] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:58.437 [2024-11-26 17:24:28.351536] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:58.437 [2024-11-26 17:24:28.351648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:58.437 [2024-11-26 17:24:28.351975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:58.437 [2024-11-26 17:24:28.351993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73289 
00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73289 ']' 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73289 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73289 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:58.437 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:58.437 killing process with pid 73289 00:27:58.438 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73289' 00:27:58.438 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73289 00:27:58.438 [2024-11-26 17:24:28.404346] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:58.438 17:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73289 00:27:59.005 [2024-11-26 17:24:28.817822] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:59.939 ************************************ 00:27:59.939 END TEST raid_state_function_test 00:27:59.939 ************************************ 00:27:59.939 17:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:27:59.939 00:27:59.939 real 0m11.940s 00:27:59.939 user 0m18.753s 00:27:59.939 sys 0m2.603s 00:27:59.939 17:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:59.939 17:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:00.199 17:24:30 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:28:00.199 17:24:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:00.199 17:24:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:00.199 17:24:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:00.199 ************************************ 00:28:00.199 START TEST raid_state_function_test_sb 00:28:00.199 ************************************ 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:00.199 17:24:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73957 00:28:00.199 Process raid pid: 73957 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73957' 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73957 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73957 ']' 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:00.199 17:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.199 [2024-11-26 17:24:30.182748] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:28:00.199 [2024-11-26 17:24:30.183117] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.458 [2024-11-26 17:24:30.361074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.458 [2024-11-26 17:24:30.506643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.717 [2024-11-26 17:24:30.733049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:00.717 [2024-11-26 17:24:30.733110] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.977 [2024-11-26 17:24:31.026804] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:00.977 [2024-11-26 17:24:31.026873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:00.977 [2024-11-26 17:24:31.026886] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:00.977 [2024-11-26 17:24:31.026900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:00.977 [2024-11-26 17:24:31.026908] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:28:00.977 [2024-11-26 17:24:31.026920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:00.977 [2024-11-26 17:24:31.026929] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:00.977 [2024-11-26 17:24:31.026942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.977 17:24:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.977 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:00.977 "name": "Existed_Raid", 00:28:00.977 "uuid": "0a29e01d-4ffb-45e1-b513-395044cd32e8", 00:28:00.977 "strip_size_kb": 0, 00:28:00.977 "state": "configuring", 00:28:00.977 "raid_level": "raid1", 00:28:00.977 "superblock": true, 00:28:00.977 "num_base_bdevs": 4, 00:28:00.977 "num_base_bdevs_discovered": 0, 00:28:00.977 "num_base_bdevs_operational": 4, 00:28:00.977 "base_bdevs_list": [ 00:28:00.977 { 00:28:00.977 "name": "BaseBdev1", 00:28:00.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.977 "is_configured": false, 00:28:00.977 "data_offset": 0, 00:28:00.977 "data_size": 0 00:28:00.977 }, 00:28:00.977 { 00:28:00.977 "name": "BaseBdev2", 00:28:00.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.977 "is_configured": false, 00:28:00.977 "data_offset": 0, 00:28:00.977 "data_size": 0 00:28:00.977 }, 00:28:00.977 { 00:28:00.977 "name": "BaseBdev3", 00:28:00.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.977 "is_configured": false, 00:28:00.977 "data_offset": 0, 00:28:00.977 "data_size": 0 00:28:00.977 }, 00:28:00.977 { 00:28:00.977 "name": "BaseBdev4", 00:28:00.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.977 "is_configured": false, 00:28:00.977 "data_offset": 0, 00:28:00.978 "data_size": 0 00:28:00.978 } 00:28:00.978 ] 00:28:00.978 }' 00:28:00.978 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:00.978 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.546 17:24:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.546 [2024-11-26 17:24:31.446174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:01.546 [2024-11-26 17:24:31.446227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.546 [2024-11-26 17:24:31.454137] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:01.546 [2024-11-26 17:24:31.454192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:01.546 [2024-11-26 17:24:31.454204] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:01.546 [2024-11-26 17:24:31.454216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:01.546 [2024-11-26 17:24:31.454225] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:01.546 [2024-11-26 17:24:31.454238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:01.546 [2024-11-26 17:24:31.454246] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:28:01.546 [2024-11-26 17:24:31.454258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.546 [2024-11-26 17:24:31.500609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:01.546 BaseBdev1 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.546 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.546 [ 00:28:01.546 { 00:28:01.546 "name": "BaseBdev1", 00:28:01.546 "aliases": [ 00:28:01.546 "8fa7566c-70ab-45db-b2ff-42f88427796d" 00:28:01.546 ], 00:28:01.546 "product_name": "Malloc disk", 00:28:01.546 "block_size": 512, 00:28:01.546 "num_blocks": 65536, 00:28:01.546 "uuid": "8fa7566c-70ab-45db-b2ff-42f88427796d", 00:28:01.546 "assigned_rate_limits": { 00:28:01.546 "rw_ios_per_sec": 0, 00:28:01.546 "rw_mbytes_per_sec": 0, 00:28:01.546 "r_mbytes_per_sec": 0, 00:28:01.547 "w_mbytes_per_sec": 0 00:28:01.547 }, 00:28:01.547 "claimed": true, 00:28:01.547 "claim_type": "exclusive_write", 00:28:01.547 "zoned": false, 00:28:01.547 "supported_io_types": { 00:28:01.547 "read": true, 00:28:01.547 "write": true, 00:28:01.547 "unmap": true, 00:28:01.547 "flush": true, 00:28:01.547 "reset": true, 00:28:01.547 "nvme_admin": false, 00:28:01.547 "nvme_io": false, 00:28:01.547 "nvme_io_md": false, 00:28:01.547 "write_zeroes": true, 00:28:01.547 "zcopy": true, 00:28:01.547 "get_zone_info": false, 00:28:01.547 "zone_management": false, 00:28:01.547 "zone_append": false, 00:28:01.547 "compare": false, 00:28:01.547 "compare_and_write": false, 00:28:01.547 "abort": true, 00:28:01.547 "seek_hole": false, 00:28:01.547 "seek_data": false, 00:28:01.547 "copy": true, 00:28:01.547 "nvme_iov_md": false 00:28:01.547 }, 00:28:01.547 "memory_domains": [ 00:28:01.547 { 00:28:01.547 "dma_device_id": "system", 00:28:01.547 "dma_device_type": 1 00:28:01.547 }, 00:28:01.547 { 00:28:01.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:01.547 "dma_device_type": 2 00:28:01.547 } 00:28:01.547 
], 00:28:01.547 "driver_specific": {} 00:28:01.547 } 00:28:01.547 ] 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.547 17:24:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:01.547 "name": "Existed_Raid", 00:28:01.547 "uuid": "6c9d7ef1-580f-4680-8ac4-321a70153242", 00:28:01.547 "strip_size_kb": 0, 00:28:01.547 "state": "configuring", 00:28:01.547 "raid_level": "raid1", 00:28:01.547 "superblock": true, 00:28:01.547 "num_base_bdevs": 4, 00:28:01.547 "num_base_bdevs_discovered": 1, 00:28:01.547 "num_base_bdevs_operational": 4, 00:28:01.547 "base_bdevs_list": [ 00:28:01.547 { 00:28:01.547 "name": "BaseBdev1", 00:28:01.547 "uuid": "8fa7566c-70ab-45db-b2ff-42f88427796d", 00:28:01.547 "is_configured": true, 00:28:01.547 "data_offset": 2048, 00:28:01.547 "data_size": 63488 00:28:01.547 }, 00:28:01.547 { 00:28:01.547 "name": "BaseBdev2", 00:28:01.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.547 "is_configured": false, 00:28:01.547 "data_offset": 0, 00:28:01.547 "data_size": 0 00:28:01.547 }, 00:28:01.547 { 00:28:01.547 "name": "BaseBdev3", 00:28:01.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.547 "is_configured": false, 00:28:01.547 "data_offset": 0, 00:28:01.547 "data_size": 0 00:28:01.547 }, 00:28:01.547 { 00:28:01.547 "name": "BaseBdev4", 00:28:01.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.547 "is_configured": false, 00:28:01.547 "data_offset": 0, 00:28:01.547 "data_size": 0 00:28:01.547 } 00:28:01.547 ] 00:28:01.547 }' 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:01.547 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.116 17:24:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.116 [2024-11-26 17:24:31.952020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:02.116 [2024-11-26 17:24:31.952089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.116 [2024-11-26 17:24:31.960059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:02.116 [2024-11-26 17:24:31.962472] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:02.116 [2024-11-26 17:24:31.962534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:02.116 [2024-11-26 17:24:31.962547] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:02.116 [2024-11-26 17:24:31.962563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:02.116 [2024-11-26 17:24:31.962571] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:02.116 [2024-11-26 17:24:31.962582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:02.116 17:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.116 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:28:02.116 "name": "Existed_Raid", 00:28:02.116 "uuid": "fb0c2b94-ff98-4ef1-80af-09844fe8349e", 00:28:02.116 "strip_size_kb": 0, 00:28:02.116 "state": "configuring", 00:28:02.116 "raid_level": "raid1", 00:28:02.116 "superblock": true, 00:28:02.116 "num_base_bdevs": 4, 00:28:02.116 "num_base_bdevs_discovered": 1, 00:28:02.116 "num_base_bdevs_operational": 4, 00:28:02.116 "base_bdevs_list": [ 00:28:02.116 { 00:28:02.116 "name": "BaseBdev1", 00:28:02.116 "uuid": "8fa7566c-70ab-45db-b2ff-42f88427796d", 00:28:02.116 "is_configured": true, 00:28:02.116 "data_offset": 2048, 00:28:02.116 "data_size": 63488 00:28:02.116 }, 00:28:02.116 { 00:28:02.116 "name": "BaseBdev2", 00:28:02.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.116 "is_configured": false, 00:28:02.116 "data_offset": 0, 00:28:02.116 "data_size": 0 00:28:02.116 }, 00:28:02.116 { 00:28:02.116 "name": "BaseBdev3", 00:28:02.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.116 "is_configured": false, 00:28:02.116 "data_offset": 0, 00:28:02.116 "data_size": 0 00:28:02.116 }, 00:28:02.116 { 00:28:02.116 "name": "BaseBdev4", 00:28:02.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.116 "is_configured": false, 00:28:02.116 "data_offset": 0, 00:28:02.116 "data_size": 0 00:28:02.116 } 00:28:02.116 ] 00:28:02.116 }' 00:28:02.116 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:02.116 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.376 [2024-11-26 17:24:32.430511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:28:02.376 BaseBdev2 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.376 [ 00:28:02.376 { 00:28:02.376 "name": "BaseBdev2", 00:28:02.376 "aliases": [ 00:28:02.376 "58d0680b-a123-4d88-8b09-c490083a548c" 00:28:02.376 ], 00:28:02.376 "product_name": "Malloc disk", 00:28:02.376 "block_size": 512, 00:28:02.376 "num_blocks": 65536, 00:28:02.376 "uuid": "58d0680b-a123-4d88-8b09-c490083a548c", 00:28:02.376 
"assigned_rate_limits": { 00:28:02.376 "rw_ios_per_sec": 0, 00:28:02.376 "rw_mbytes_per_sec": 0, 00:28:02.376 "r_mbytes_per_sec": 0, 00:28:02.376 "w_mbytes_per_sec": 0 00:28:02.376 }, 00:28:02.376 "claimed": true, 00:28:02.376 "claim_type": "exclusive_write", 00:28:02.376 "zoned": false, 00:28:02.376 "supported_io_types": { 00:28:02.376 "read": true, 00:28:02.376 "write": true, 00:28:02.376 "unmap": true, 00:28:02.376 "flush": true, 00:28:02.376 "reset": true, 00:28:02.376 "nvme_admin": false, 00:28:02.376 "nvme_io": false, 00:28:02.376 "nvme_io_md": false, 00:28:02.376 "write_zeroes": true, 00:28:02.376 "zcopy": true, 00:28:02.376 "get_zone_info": false, 00:28:02.376 "zone_management": false, 00:28:02.376 "zone_append": false, 00:28:02.376 "compare": false, 00:28:02.376 "compare_and_write": false, 00:28:02.376 "abort": true, 00:28:02.376 "seek_hole": false, 00:28:02.376 "seek_data": false, 00:28:02.376 "copy": true, 00:28:02.376 "nvme_iov_md": false 00:28:02.376 }, 00:28:02.376 "memory_domains": [ 00:28:02.376 { 00:28:02.376 "dma_device_id": "system", 00:28:02.376 "dma_device_type": 1 00:28:02.376 }, 00:28:02.376 { 00:28:02.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:02.376 "dma_device_type": 2 00:28:02.376 } 00:28:02.376 ], 00:28:02.376 "driver_specific": {} 00:28:02.376 } 00:28:02.376 ] 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:02.376 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.635 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.635 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:02.635 "name": "Existed_Raid", 00:28:02.635 "uuid": "fb0c2b94-ff98-4ef1-80af-09844fe8349e", 00:28:02.635 "strip_size_kb": 0, 00:28:02.635 "state": "configuring", 00:28:02.635 "raid_level": "raid1", 00:28:02.635 "superblock": true, 00:28:02.635 "num_base_bdevs": 4, 00:28:02.635 "num_base_bdevs_discovered": 2, 00:28:02.635 "num_base_bdevs_operational": 4, 
00:28:02.635 "base_bdevs_list": [ 00:28:02.635 { 00:28:02.635 "name": "BaseBdev1", 00:28:02.635 "uuid": "8fa7566c-70ab-45db-b2ff-42f88427796d", 00:28:02.635 "is_configured": true, 00:28:02.635 "data_offset": 2048, 00:28:02.635 "data_size": 63488 00:28:02.635 }, 00:28:02.635 { 00:28:02.635 "name": "BaseBdev2", 00:28:02.635 "uuid": "58d0680b-a123-4d88-8b09-c490083a548c", 00:28:02.635 "is_configured": true, 00:28:02.635 "data_offset": 2048, 00:28:02.635 "data_size": 63488 00:28:02.635 }, 00:28:02.635 { 00:28:02.635 "name": "BaseBdev3", 00:28:02.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.635 "is_configured": false, 00:28:02.635 "data_offset": 0, 00:28:02.635 "data_size": 0 00:28:02.635 }, 00:28:02.635 { 00:28:02.635 "name": "BaseBdev4", 00:28:02.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.635 "is_configured": false, 00:28:02.635 "data_offset": 0, 00:28:02.635 "data_size": 0 00:28:02.635 } 00:28:02.635 ] 00:28:02.635 }' 00:28:02.635 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:02.635 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.895 [2024-11-26 17:24:32.853983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:02.895 BaseBdev3 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.895 [ 00:28:02.895 { 00:28:02.895 "name": "BaseBdev3", 00:28:02.895 "aliases": [ 00:28:02.895 "94d46df0-e774-4d9c-ab48-4afefff12dbd" 00:28:02.895 ], 00:28:02.895 "product_name": "Malloc disk", 00:28:02.895 "block_size": 512, 00:28:02.895 "num_blocks": 65536, 00:28:02.895 "uuid": "94d46df0-e774-4d9c-ab48-4afefff12dbd", 00:28:02.895 "assigned_rate_limits": { 00:28:02.895 "rw_ios_per_sec": 0, 00:28:02.895 "rw_mbytes_per_sec": 0, 00:28:02.895 "r_mbytes_per_sec": 0, 00:28:02.895 "w_mbytes_per_sec": 0 00:28:02.895 }, 00:28:02.895 "claimed": true, 00:28:02.895 "claim_type": "exclusive_write", 00:28:02.895 "zoned": false, 00:28:02.895 "supported_io_types": { 00:28:02.895 "read": true, 00:28:02.895 
"write": true, 00:28:02.895 "unmap": true, 00:28:02.895 "flush": true, 00:28:02.895 "reset": true, 00:28:02.895 "nvme_admin": false, 00:28:02.895 "nvme_io": false, 00:28:02.895 "nvme_io_md": false, 00:28:02.895 "write_zeroes": true, 00:28:02.895 "zcopy": true, 00:28:02.895 "get_zone_info": false, 00:28:02.895 "zone_management": false, 00:28:02.895 "zone_append": false, 00:28:02.895 "compare": false, 00:28:02.895 "compare_and_write": false, 00:28:02.895 "abort": true, 00:28:02.895 "seek_hole": false, 00:28:02.895 "seek_data": false, 00:28:02.895 "copy": true, 00:28:02.895 "nvme_iov_md": false 00:28:02.895 }, 00:28:02.895 "memory_domains": [ 00:28:02.895 { 00:28:02.895 "dma_device_id": "system", 00:28:02.895 "dma_device_type": 1 00:28:02.895 }, 00:28:02.895 { 00:28:02.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:02.895 "dma_device_type": 2 00:28:02.895 } 00:28:02.895 ], 00:28:02.895 "driver_specific": {} 00:28:02.895 } 00:28:02.895 ] 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:02.895 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:02.896 "name": "Existed_Raid", 00:28:02.896 "uuid": "fb0c2b94-ff98-4ef1-80af-09844fe8349e", 00:28:02.896 "strip_size_kb": 0, 00:28:02.896 "state": "configuring", 00:28:02.896 "raid_level": "raid1", 00:28:02.896 "superblock": true, 00:28:02.896 "num_base_bdevs": 4, 00:28:02.896 "num_base_bdevs_discovered": 3, 00:28:02.896 "num_base_bdevs_operational": 4, 00:28:02.896 "base_bdevs_list": [ 00:28:02.896 { 00:28:02.896 "name": "BaseBdev1", 00:28:02.896 "uuid": "8fa7566c-70ab-45db-b2ff-42f88427796d", 00:28:02.896 "is_configured": true, 00:28:02.896 "data_offset": 2048, 00:28:02.896 "data_size": 63488 00:28:02.896 }, 00:28:02.896 { 00:28:02.896 "name": "BaseBdev2", 00:28:02.896 "uuid": 
"58d0680b-a123-4d88-8b09-c490083a548c", 00:28:02.896 "is_configured": true, 00:28:02.896 "data_offset": 2048, 00:28:02.896 "data_size": 63488 00:28:02.896 }, 00:28:02.896 { 00:28:02.896 "name": "BaseBdev3", 00:28:02.896 "uuid": "94d46df0-e774-4d9c-ab48-4afefff12dbd", 00:28:02.896 "is_configured": true, 00:28:02.896 "data_offset": 2048, 00:28:02.896 "data_size": 63488 00:28:02.896 }, 00:28:02.896 { 00:28:02.896 "name": "BaseBdev4", 00:28:02.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.896 "is_configured": false, 00:28:02.896 "data_offset": 0, 00:28:02.896 "data_size": 0 00:28:02.896 } 00:28:02.896 ] 00:28:02.896 }' 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:02.896 17:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.465 [2024-11-26 17:24:33.339165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:03.465 [2024-11-26 17:24:33.339587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:03.465 [2024-11-26 17:24:33.339608] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:03.465 BaseBdev4 00:28:03.465 [2024-11-26 17:24:33.339941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:03.465 [2024-11-26 17:24:33.340120] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:03.465 [2024-11-26 17:24:33.340135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:28:03.465 [2024-11-26 17:24:33.340299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.465 [ 00:28:03.465 { 00:28:03.465 "name": "BaseBdev4", 00:28:03.465 "aliases": [ 00:28:03.465 "694867e2-70df-4c5a-95fc-eecc834bbade" 00:28:03.465 ], 00:28:03.465 "product_name": "Malloc disk", 00:28:03.465 "block_size": 512, 00:28:03.465 
"num_blocks": 65536, 00:28:03.465 "uuid": "694867e2-70df-4c5a-95fc-eecc834bbade", 00:28:03.465 "assigned_rate_limits": { 00:28:03.465 "rw_ios_per_sec": 0, 00:28:03.465 "rw_mbytes_per_sec": 0, 00:28:03.465 "r_mbytes_per_sec": 0, 00:28:03.465 "w_mbytes_per_sec": 0 00:28:03.465 }, 00:28:03.465 "claimed": true, 00:28:03.465 "claim_type": "exclusive_write", 00:28:03.465 "zoned": false, 00:28:03.465 "supported_io_types": { 00:28:03.465 "read": true, 00:28:03.465 "write": true, 00:28:03.465 "unmap": true, 00:28:03.465 "flush": true, 00:28:03.465 "reset": true, 00:28:03.465 "nvme_admin": false, 00:28:03.465 "nvme_io": false, 00:28:03.465 "nvme_io_md": false, 00:28:03.465 "write_zeroes": true, 00:28:03.465 "zcopy": true, 00:28:03.465 "get_zone_info": false, 00:28:03.465 "zone_management": false, 00:28:03.465 "zone_append": false, 00:28:03.465 "compare": false, 00:28:03.465 "compare_and_write": false, 00:28:03.465 "abort": true, 00:28:03.465 "seek_hole": false, 00:28:03.465 "seek_data": false, 00:28:03.465 "copy": true, 00:28:03.465 "nvme_iov_md": false 00:28:03.465 }, 00:28:03.465 "memory_domains": [ 00:28:03.465 { 00:28:03.465 "dma_device_id": "system", 00:28:03.465 "dma_device_type": 1 00:28:03.465 }, 00:28:03.465 { 00:28:03.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:03.465 "dma_device_type": 2 00:28:03.465 } 00:28:03.465 ], 00:28:03.465 "driver_specific": {} 00:28:03.465 } 00:28:03.465 ] 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:03.465 "name": "Existed_Raid", 00:28:03.465 "uuid": "fb0c2b94-ff98-4ef1-80af-09844fe8349e", 00:28:03.465 "strip_size_kb": 0, 00:28:03.465 "state": "online", 00:28:03.465 "raid_level": "raid1", 00:28:03.465 "superblock": true, 00:28:03.465 "num_base_bdevs": 4, 
00:28:03.465 "num_base_bdevs_discovered": 4, 00:28:03.465 "num_base_bdevs_operational": 4, 00:28:03.465 "base_bdevs_list": [ 00:28:03.465 { 00:28:03.465 "name": "BaseBdev1", 00:28:03.465 "uuid": "8fa7566c-70ab-45db-b2ff-42f88427796d", 00:28:03.465 "is_configured": true, 00:28:03.465 "data_offset": 2048, 00:28:03.465 "data_size": 63488 00:28:03.465 }, 00:28:03.465 { 00:28:03.465 "name": "BaseBdev2", 00:28:03.465 "uuid": "58d0680b-a123-4d88-8b09-c490083a548c", 00:28:03.465 "is_configured": true, 00:28:03.465 "data_offset": 2048, 00:28:03.465 "data_size": 63488 00:28:03.465 }, 00:28:03.465 { 00:28:03.465 "name": "BaseBdev3", 00:28:03.465 "uuid": "94d46df0-e774-4d9c-ab48-4afefff12dbd", 00:28:03.465 "is_configured": true, 00:28:03.465 "data_offset": 2048, 00:28:03.465 "data_size": 63488 00:28:03.465 }, 00:28:03.465 { 00:28:03.465 "name": "BaseBdev4", 00:28:03.465 "uuid": "694867e2-70df-4c5a-95fc-eecc834bbade", 00:28:03.465 "is_configured": true, 00:28:03.465 "data_offset": 2048, 00:28:03.465 "data_size": 63488 00:28:03.465 } 00:28:03.465 ] 00:28:03.465 }' 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:03.465 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.725 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:28:03.725 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:03.725 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:03.725 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:03.725 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:28:03.725 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:03.725 
17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:03.725 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.725 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:03.725 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.725 [2024-11-26 17:24:33.755038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:03.725 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.725 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:03.725 "name": "Existed_Raid", 00:28:03.725 "aliases": [ 00:28:03.725 "fb0c2b94-ff98-4ef1-80af-09844fe8349e" 00:28:03.725 ], 00:28:03.725 "product_name": "Raid Volume", 00:28:03.725 "block_size": 512, 00:28:03.725 "num_blocks": 63488, 00:28:03.725 "uuid": "fb0c2b94-ff98-4ef1-80af-09844fe8349e", 00:28:03.725 "assigned_rate_limits": { 00:28:03.725 "rw_ios_per_sec": 0, 00:28:03.725 "rw_mbytes_per_sec": 0, 00:28:03.725 "r_mbytes_per_sec": 0, 00:28:03.725 "w_mbytes_per_sec": 0 00:28:03.725 }, 00:28:03.725 "claimed": false, 00:28:03.725 "zoned": false, 00:28:03.725 "supported_io_types": { 00:28:03.725 "read": true, 00:28:03.725 "write": true, 00:28:03.725 "unmap": false, 00:28:03.725 "flush": false, 00:28:03.725 "reset": true, 00:28:03.725 "nvme_admin": false, 00:28:03.725 "nvme_io": false, 00:28:03.725 "nvme_io_md": false, 00:28:03.725 "write_zeroes": true, 00:28:03.725 "zcopy": false, 00:28:03.725 "get_zone_info": false, 00:28:03.725 "zone_management": false, 00:28:03.725 "zone_append": false, 00:28:03.725 "compare": false, 00:28:03.725 "compare_and_write": false, 00:28:03.725 "abort": false, 00:28:03.725 "seek_hole": false, 00:28:03.725 "seek_data": false, 00:28:03.725 "copy": false, 00:28:03.725 
"nvme_iov_md": false 00:28:03.725 }, 00:28:03.725 "memory_domains": [ 00:28:03.725 { 00:28:03.725 "dma_device_id": "system", 00:28:03.725 "dma_device_type": 1 00:28:03.725 }, 00:28:03.725 { 00:28:03.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:03.725 "dma_device_type": 2 00:28:03.725 }, 00:28:03.725 { 00:28:03.725 "dma_device_id": "system", 00:28:03.725 "dma_device_type": 1 00:28:03.725 }, 00:28:03.725 { 00:28:03.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:03.725 "dma_device_type": 2 00:28:03.725 }, 00:28:03.725 { 00:28:03.725 "dma_device_id": "system", 00:28:03.725 "dma_device_type": 1 00:28:03.725 }, 00:28:03.725 { 00:28:03.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:03.725 "dma_device_type": 2 00:28:03.725 }, 00:28:03.725 { 00:28:03.725 "dma_device_id": "system", 00:28:03.725 "dma_device_type": 1 00:28:03.725 }, 00:28:03.725 { 00:28:03.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:03.725 "dma_device_type": 2 00:28:03.725 } 00:28:03.725 ], 00:28:03.725 "driver_specific": { 00:28:03.725 "raid": { 00:28:03.725 "uuid": "fb0c2b94-ff98-4ef1-80af-09844fe8349e", 00:28:03.725 "strip_size_kb": 0, 00:28:03.725 "state": "online", 00:28:03.725 "raid_level": "raid1", 00:28:03.725 "superblock": true, 00:28:03.725 "num_base_bdevs": 4, 00:28:03.725 "num_base_bdevs_discovered": 4, 00:28:03.725 "num_base_bdevs_operational": 4, 00:28:03.725 "base_bdevs_list": [ 00:28:03.725 { 00:28:03.725 "name": "BaseBdev1", 00:28:03.725 "uuid": "8fa7566c-70ab-45db-b2ff-42f88427796d", 00:28:03.725 "is_configured": true, 00:28:03.725 "data_offset": 2048, 00:28:03.725 "data_size": 63488 00:28:03.725 }, 00:28:03.725 { 00:28:03.725 "name": "BaseBdev2", 00:28:03.725 "uuid": "58d0680b-a123-4d88-8b09-c490083a548c", 00:28:03.725 "is_configured": true, 00:28:03.725 "data_offset": 2048, 00:28:03.725 "data_size": 63488 00:28:03.725 }, 00:28:03.725 { 00:28:03.725 "name": "BaseBdev3", 00:28:03.725 "uuid": "94d46df0-e774-4d9c-ab48-4afefff12dbd", 00:28:03.725 "is_configured": true, 
00:28:03.725 "data_offset": 2048, 00:28:03.725 "data_size": 63488 00:28:03.725 }, 00:28:03.725 { 00:28:03.725 "name": "BaseBdev4", 00:28:03.725 "uuid": "694867e2-70df-4c5a-95fc-eecc834bbade", 00:28:03.725 "is_configured": true, 00:28:03.725 "data_offset": 2048, 00:28:03.725 "data_size": 63488 00:28:03.725 } 00:28:03.725 ] 00:28:03.725 } 00:28:03.725 } 00:28:03.725 }' 00:28:03.725 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:28:03.985 BaseBdev2 00:28:03.985 BaseBdev3 00:28:03.985 BaseBdev4' 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:03.985 17:24:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.985 17:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.985 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:03.985 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:03.985 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:28:03.985 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:28:03.985 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.985 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.985 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:03.985 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.985 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:03.985 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:03.985 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:03.985 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.985 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.985 [2024-11-26 17:24:34.058568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:28:04.245 17:24:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:04.245 "name": "Existed_Raid", 00:28:04.245 "uuid": "fb0c2b94-ff98-4ef1-80af-09844fe8349e", 00:28:04.245 "strip_size_kb": 0, 00:28:04.245 
"state": "online", 00:28:04.245 "raid_level": "raid1", 00:28:04.245 "superblock": true, 00:28:04.245 "num_base_bdevs": 4, 00:28:04.245 "num_base_bdevs_discovered": 3, 00:28:04.245 "num_base_bdevs_operational": 3, 00:28:04.245 "base_bdevs_list": [ 00:28:04.245 { 00:28:04.245 "name": null, 00:28:04.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:04.245 "is_configured": false, 00:28:04.245 "data_offset": 0, 00:28:04.245 "data_size": 63488 00:28:04.245 }, 00:28:04.245 { 00:28:04.245 "name": "BaseBdev2", 00:28:04.245 "uuid": "58d0680b-a123-4d88-8b09-c490083a548c", 00:28:04.245 "is_configured": true, 00:28:04.245 "data_offset": 2048, 00:28:04.245 "data_size": 63488 00:28:04.245 }, 00:28:04.245 { 00:28:04.245 "name": "BaseBdev3", 00:28:04.245 "uuid": "94d46df0-e774-4d9c-ab48-4afefff12dbd", 00:28:04.245 "is_configured": true, 00:28:04.245 "data_offset": 2048, 00:28:04.245 "data_size": 63488 00:28:04.245 }, 00:28:04.245 { 00:28:04.245 "name": "BaseBdev4", 00:28:04.245 "uuid": "694867e2-70df-4c5a-95fc-eecc834bbade", 00:28:04.245 "is_configured": true, 00:28:04.245 "data_offset": 2048, 00:28:04.245 "data_size": 63488 00:28:04.245 } 00:28:04.245 ] 00:28:04.245 }' 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:04.245 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.504 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:28:04.504 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:04.504 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:04.504 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:04.504 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.504 17:24:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.504 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.504 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:04.504 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:04.504 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:28:04.504 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.504 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.504 [2024-11-26 17:24:34.610024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.762 [2024-11-26 17:24:34.761099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:04.762 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.021 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.021 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:05.021 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:05.021 17:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:28:05.021 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.021 17:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.022 [2024-11-26 17:24:34.914586] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:05.022 [2024-11-26 17:24:34.914713] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:05.022 [2024-11-26 17:24:35.011945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:05.022 [2024-11-26 17:24:35.012025] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:05.022 [2024-11-26 17:24:35.012041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.022 BaseBdev2 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.022 17:24:35 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:28:05.022 [ 00:28:05.022 { 00:28:05.022 "name": "BaseBdev2", 00:28:05.022 "aliases": [ 00:28:05.022 "3b2f1960-f809-4d32-8f66-34aadb015ef1" 00:28:05.022 ], 00:28:05.022 "product_name": "Malloc disk", 00:28:05.022 "block_size": 512, 00:28:05.022 "num_blocks": 65536, 00:28:05.022 "uuid": "3b2f1960-f809-4d32-8f66-34aadb015ef1", 00:28:05.022 "assigned_rate_limits": { 00:28:05.022 "rw_ios_per_sec": 0, 00:28:05.022 "rw_mbytes_per_sec": 0, 00:28:05.022 "r_mbytes_per_sec": 0, 00:28:05.022 "w_mbytes_per_sec": 0 00:28:05.022 }, 00:28:05.022 "claimed": false, 00:28:05.022 "zoned": false, 00:28:05.022 "supported_io_types": { 00:28:05.022 "read": true, 00:28:05.022 "write": true, 00:28:05.022 "unmap": true, 00:28:05.022 "flush": true, 00:28:05.022 "reset": true, 00:28:05.022 "nvme_admin": false, 00:28:05.282 "nvme_io": false, 00:28:05.282 "nvme_io_md": false, 00:28:05.282 "write_zeroes": true, 00:28:05.282 "zcopy": true, 00:28:05.282 "get_zone_info": false, 00:28:05.282 "zone_management": false, 00:28:05.282 "zone_append": false, 00:28:05.282 "compare": false, 00:28:05.282 "compare_and_write": false, 00:28:05.282 "abort": true, 00:28:05.282 "seek_hole": false, 00:28:05.282 "seek_data": false, 00:28:05.282 "copy": true, 00:28:05.282 "nvme_iov_md": false 00:28:05.282 }, 00:28:05.282 "memory_domains": [ 00:28:05.282 { 00:28:05.282 "dma_device_id": "system", 00:28:05.282 "dma_device_type": 1 00:28:05.282 }, 00:28:05.282 { 00:28:05.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:05.282 "dma_device_type": 2 00:28:05.282 } 00:28:05.282 ], 00:28:05.282 "driver_specific": {} 00:28:05.282 } 00:28:05.282 ] 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:05.282 17:24:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.282 BaseBdev3 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.282 17:24:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.282 [ 00:28:05.282 { 00:28:05.282 "name": "BaseBdev3", 00:28:05.282 "aliases": [ 00:28:05.282 "6040d709-1a0d-455d-b18e-6706e3b9ef31" 00:28:05.282 ], 00:28:05.282 "product_name": "Malloc disk", 00:28:05.282 "block_size": 512, 00:28:05.282 "num_blocks": 65536, 00:28:05.282 "uuid": "6040d709-1a0d-455d-b18e-6706e3b9ef31", 00:28:05.282 "assigned_rate_limits": { 00:28:05.282 "rw_ios_per_sec": 0, 00:28:05.282 "rw_mbytes_per_sec": 0, 00:28:05.282 "r_mbytes_per_sec": 0, 00:28:05.282 "w_mbytes_per_sec": 0 00:28:05.282 }, 00:28:05.282 "claimed": false, 00:28:05.282 "zoned": false, 00:28:05.282 "supported_io_types": { 00:28:05.282 "read": true, 00:28:05.282 "write": true, 00:28:05.282 "unmap": true, 00:28:05.282 "flush": true, 00:28:05.282 "reset": true, 00:28:05.282 "nvme_admin": false, 00:28:05.282 "nvme_io": false, 00:28:05.282 "nvme_io_md": false, 00:28:05.282 "write_zeroes": true, 00:28:05.282 "zcopy": true, 00:28:05.282 "get_zone_info": false, 00:28:05.282 "zone_management": false, 00:28:05.282 "zone_append": false, 00:28:05.282 "compare": false, 00:28:05.282 "compare_and_write": false, 00:28:05.282 "abort": true, 00:28:05.282 "seek_hole": false, 00:28:05.282 "seek_data": false, 00:28:05.282 "copy": true, 00:28:05.282 "nvme_iov_md": false 00:28:05.282 }, 00:28:05.282 "memory_domains": [ 00:28:05.282 { 00:28:05.282 "dma_device_id": "system", 00:28:05.282 "dma_device_type": 1 00:28:05.282 }, 00:28:05.282 { 00:28:05.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:05.282 "dma_device_type": 2 00:28:05.282 } 00:28:05.282 ], 00:28:05.282 "driver_specific": {} 00:28:05.282 } 00:28:05.282 ] 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.282 BaseBdev4 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.282 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.282 [ 00:28:05.282 { 00:28:05.282 "name": "BaseBdev4", 00:28:05.282 "aliases": [ 00:28:05.282 "7b0be0df-38f0-44f4-a1c8-1aa14f5ad476" 00:28:05.282 ], 00:28:05.282 "product_name": "Malloc disk", 00:28:05.282 "block_size": 512, 00:28:05.282 "num_blocks": 65536, 00:28:05.283 "uuid": "7b0be0df-38f0-44f4-a1c8-1aa14f5ad476", 00:28:05.283 "assigned_rate_limits": { 00:28:05.283 "rw_ios_per_sec": 0, 00:28:05.283 "rw_mbytes_per_sec": 0, 00:28:05.283 "r_mbytes_per_sec": 0, 00:28:05.283 "w_mbytes_per_sec": 0 00:28:05.283 }, 00:28:05.283 "claimed": false, 00:28:05.283 "zoned": false, 00:28:05.283 "supported_io_types": { 00:28:05.283 "read": true, 00:28:05.283 "write": true, 00:28:05.283 "unmap": true, 00:28:05.283 "flush": true, 00:28:05.283 "reset": true, 00:28:05.283 "nvme_admin": false, 00:28:05.283 "nvme_io": false, 00:28:05.283 "nvme_io_md": false, 00:28:05.283 "write_zeroes": true, 00:28:05.283 "zcopy": true, 00:28:05.283 "get_zone_info": false, 00:28:05.283 "zone_management": false, 00:28:05.283 "zone_append": false, 00:28:05.283 "compare": false, 00:28:05.283 "compare_and_write": false, 00:28:05.283 "abort": true, 00:28:05.283 "seek_hole": false, 00:28:05.283 "seek_data": false, 00:28:05.283 "copy": true, 00:28:05.283 "nvme_iov_md": false 00:28:05.283 }, 00:28:05.283 "memory_domains": [ 00:28:05.283 { 00:28:05.283 "dma_device_id": "system", 00:28:05.283 "dma_device_type": 1 00:28:05.283 }, 00:28:05.283 { 00:28:05.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:05.283 "dma_device_type": 2 00:28:05.283 } 00:28:05.283 ], 00:28:05.283 "driver_specific": {} 00:28:05.283 } 00:28:05.283 ] 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.283 [2024-11-26 17:24:35.333594] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:05.283 [2024-11-26 17:24:35.333652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:05.283 [2024-11-26 17:24:35.333681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:05.283 [2024-11-26 17:24:35.336043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:05.283 [2024-11-26 17:24:35.336098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:05.283 "name": "Existed_Raid", 00:28:05.283 "uuid": "76d5e400-6996-4700-a504-3081355c0a41", 00:28:05.283 "strip_size_kb": 0, 00:28:05.283 "state": "configuring", 00:28:05.283 "raid_level": "raid1", 00:28:05.283 "superblock": true, 00:28:05.283 "num_base_bdevs": 4, 00:28:05.283 "num_base_bdevs_discovered": 3, 00:28:05.283 "num_base_bdevs_operational": 4, 00:28:05.283 "base_bdevs_list": [ 00:28:05.283 { 00:28:05.283 "name": "BaseBdev1", 00:28:05.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.283 "is_configured": false, 00:28:05.283 "data_offset": 0, 00:28:05.283 "data_size": 0 00:28:05.283 }, 00:28:05.283 { 00:28:05.283 "name": "BaseBdev2", 00:28:05.283 "uuid": "3b2f1960-f809-4d32-8f66-34aadb015ef1", 
00:28:05.283 "is_configured": true, 00:28:05.283 "data_offset": 2048, 00:28:05.283 "data_size": 63488 00:28:05.283 }, 00:28:05.283 { 00:28:05.283 "name": "BaseBdev3", 00:28:05.283 "uuid": "6040d709-1a0d-455d-b18e-6706e3b9ef31", 00:28:05.283 "is_configured": true, 00:28:05.283 "data_offset": 2048, 00:28:05.283 "data_size": 63488 00:28:05.283 }, 00:28:05.283 { 00:28:05.283 "name": "BaseBdev4", 00:28:05.283 "uuid": "7b0be0df-38f0-44f4-a1c8-1aa14f5ad476", 00:28:05.283 "is_configured": true, 00:28:05.283 "data_offset": 2048, 00:28:05.283 "data_size": 63488 00:28:05.283 } 00:28:05.283 ] 00:28:05.283 }' 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:05.283 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.852 [2024-11-26 17:24:35.768943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.852 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:05.853 "name": "Existed_Raid", 00:28:05.853 "uuid": "76d5e400-6996-4700-a504-3081355c0a41", 00:28:05.853 "strip_size_kb": 0, 00:28:05.853 "state": "configuring", 00:28:05.853 "raid_level": "raid1", 00:28:05.853 "superblock": true, 00:28:05.853 "num_base_bdevs": 4, 00:28:05.853 "num_base_bdevs_discovered": 2, 00:28:05.853 "num_base_bdevs_operational": 4, 00:28:05.853 "base_bdevs_list": [ 00:28:05.853 { 00:28:05.853 "name": "BaseBdev1", 00:28:05.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.853 "is_configured": false, 00:28:05.853 "data_offset": 0, 00:28:05.853 "data_size": 0 00:28:05.853 }, 00:28:05.853 { 00:28:05.853 "name": null, 00:28:05.853 "uuid": "3b2f1960-f809-4d32-8f66-34aadb015ef1", 00:28:05.853 
"is_configured": false, 00:28:05.853 "data_offset": 0, 00:28:05.853 "data_size": 63488 00:28:05.853 }, 00:28:05.853 { 00:28:05.853 "name": "BaseBdev3", 00:28:05.853 "uuid": "6040d709-1a0d-455d-b18e-6706e3b9ef31", 00:28:05.853 "is_configured": true, 00:28:05.853 "data_offset": 2048, 00:28:05.853 "data_size": 63488 00:28:05.853 }, 00:28:05.853 { 00:28:05.853 "name": "BaseBdev4", 00:28:05.853 "uuid": "7b0be0df-38f0-44f4-a1c8-1aa14f5ad476", 00:28:05.853 "is_configured": true, 00:28:05.853 "data_offset": 2048, 00:28:05.853 "data_size": 63488 00:28:05.853 } 00:28:05.853 ] 00:28:05.853 }' 00:28:05.853 17:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:05.853 17:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.112 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.112 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.112 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.112 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:06.112 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.371 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:28:06.371 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:06.371 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.371 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.371 [2024-11-26 17:24:36.297875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:06.371 BaseBdev1 
00:28:06.371 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.371 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:28:06.371 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:28:06.371 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:06.371 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:06.371 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:06.371 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:06.371 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:06.371 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.372 [ 00:28:06.372 { 00:28:06.372 "name": "BaseBdev1", 00:28:06.372 "aliases": [ 00:28:06.372 "a0cb6977-836e-4c1b-b828-fd83c20faa9e" 00:28:06.372 ], 00:28:06.372 "product_name": "Malloc disk", 00:28:06.372 "block_size": 512, 00:28:06.372 "num_blocks": 65536, 00:28:06.372 "uuid": "a0cb6977-836e-4c1b-b828-fd83c20faa9e", 00:28:06.372 "assigned_rate_limits": { 00:28:06.372 
"rw_ios_per_sec": 0, 00:28:06.372 "rw_mbytes_per_sec": 0, 00:28:06.372 "r_mbytes_per_sec": 0, 00:28:06.372 "w_mbytes_per_sec": 0 00:28:06.372 }, 00:28:06.372 "claimed": true, 00:28:06.372 "claim_type": "exclusive_write", 00:28:06.372 "zoned": false, 00:28:06.372 "supported_io_types": { 00:28:06.372 "read": true, 00:28:06.372 "write": true, 00:28:06.372 "unmap": true, 00:28:06.372 "flush": true, 00:28:06.372 "reset": true, 00:28:06.372 "nvme_admin": false, 00:28:06.372 "nvme_io": false, 00:28:06.372 "nvme_io_md": false, 00:28:06.372 "write_zeroes": true, 00:28:06.372 "zcopy": true, 00:28:06.372 "get_zone_info": false, 00:28:06.372 "zone_management": false, 00:28:06.372 "zone_append": false, 00:28:06.372 "compare": false, 00:28:06.372 "compare_and_write": false, 00:28:06.372 "abort": true, 00:28:06.372 "seek_hole": false, 00:28:06.372 "seek_data": false, 00:28:06.372 "copy": true, 00:28:06.372 "nvme_iov_md": false 00:28:06.372 }, 00:28:06.372 "memory_domains": [ 00:28:06.372 { 00:28:06.372 "dma_device_id": "system", 00:28:06.372 "dma_device_type": 1 00:28:06.372 }, 00:28:06.372 { 00:28:06.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:06.372 "dma_device_type": 2 00:28:06.372 } 00:28:06.372 ], 00:28:06.372 "driver_specific": {} 00:28:06.372 } 00:28:06.372 ] 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:06.372 "name": "Existed_Raid", 00:28:06.372 "uuid": "76d5e400-6996-4700-a504-3081355c0a41", 00:28:06.372 "strip_size_kb": 0, 00:28:06.372 "state": "configuring", 00:28:06.372 "raid_level": "raid1", 00:28:06.372 "superblock": true, 00:28:06.372 "num_base_bdevs": 4, 00:28:06.372 "num_base_bdevs_discovered": 3, 00:28:06.372 "num_base_bdevs_operational": 4, 00:28:06.372 "base_bdevs_list": [ 00:28:06.372 { 00:28:06.372 "name": "BaseBdev1", 00:28:06.372 "uuid": "a0cb6977-836e-4c1b-b828-fd83c20faa9e", 00:28:06.372 "is_configured": true, 00:28:06.372 "data_offset": 2048, 00:28:06.372 "data_size": 63488 
00:28:06.372 }, 00:28:06.372 { 00:28:06.372 "name": null, 00:28:06.372 "uuid": "3b2f1960-f809-4d32-8f66-34aadb015ef1", 00:28:06.372 "is_configured": false, 00:28:06.372 "data_offset": 0, 00:28:06.372 "data_size": 63488 00:28:06.372 }, 00:28:06.372 { 00:28:06.372 "name": "BaseBdev3", 00:28:06.372 "uuid": "6040d709-1a0d-455d-b18e-6706e3b9ef31", 00:28:06.372 "is_configured": true, 00:28:06.372 "data_offset": 2048, 00:28:06.372 "data_size": 63488 00:28:06.372 }, 00:28:06.372 { 00:28:06.372 "name": "BaseBdev4", 00:28:06.372 "uuid": "7b0be0df-38f0-44f4-a1c8-1aa14f5ad476", 00:28:06.372 "is_configured": true, 00:28:06.372 "data_offset": 2048, 00:28:06.372 "data_size": 63488 00:28:06.372 } 00:28:06.372 ] 00:28:06.372 }' 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:06.372 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.940 
[2024-11-26 17:24:36.845796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.940 17:24:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:06.940 "name": "Existed_Raid", 00:28:06.940 "uuid": "76d5e400-6996-4700-a504-3081355c0a41", 00:28:06.940 "strip_size_kb": 0, 00:28:06.940 "state": "configuring", 00:28:06.940 "raid_level": "raid1", 00:28:06.940 "superblock": true, 00:28:06.940 "num_base_bdevs": 4, 00:28:06.940 "num_base_bdevs_discovered": 2, 00:28:06.940 "num_base_bdevs_operational": 4, 00:28:06.940 "base_bdevs_list": [ 00:28:06.940 { 00:28:06.940 "name": "BaseBdev1", 00:28:06.940 "uuid": "a0cb6977-836e-4c1b-b828-fd83c20faa9e", 00:28:06.940 "is_configured": true, 00:28:06.940 "data_offset": 2048, 00:28:06.940 "data_size": 63488 00:28:06.940 }, 00:28:06.940 { 00:28:06.940 "name": null, 00:28:06.940 "uuid": "3b2f1960-f809-4d32-8f66-34aadb015ef1", 00:28:06.940 "is_configured": false, 00:28:06.940 "data_offset": 0, 00:28:06.940 "data_size": 63488 00:28:06.940 }, 00:28:06.940 { 00:28:06.940 "name": null, 00:28:06.940 "uuid": "6040d709-1a0d-455d-b18e-6706e3b9ef31", 00:28:06.940 "is_configured": false, 00:28:06.940 "data_offset": 0, 00:28:06.940 "data_size": 63488 00:28:06.940 }, 00:28:06.940 { 00:28:06.940 "name": "BaseBdev4", 00:28:06.940 "uuid": "7b0be0df-38f0-44f4-a1c8-1aa14f5ad476", 00:28:06.940 "is_configured": true, 00:28:06.940 "data_offset": 2048, 00:28:06.940 "data_size": 63488 00:28:06.940 } 00:28:06.940 ] 00:28:06.940 }' 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:06.940 17:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.200 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.200 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.200 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.200 17:24:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:07.200 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.460 [2024-11-26 17:24:37.341748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:07.460 "name": "Existed_Raid", 00:28:07.460 "uuid": "76d5e400-6996-4700-a504-3081355c0a41", 00:28:07.460 "strip_size_kb": 0, 00:28:07.460 "state": "configuring", 00:28:07.460 "raid_level": "raid1", 00:28:07.460 "superblock": true, 00:28:07.460 "num_base_bdevs": 4, 00:28:07.460 "num_base_bdevs_discovered": 3, 00:28:07.460 "num_base_bdevs_operational": 4, 00:28:07.460 "base_bdevs_list": [ 00:28:07.460 { 00:28:07.460 "name": "BaseBdev1", 00:28:07.460 "uuid": "a0cb6977-836e-4c1b-b828-fd83c20faa9e", 00:28:07.460 "is_configured": true, 00:28:07.460 "data_offset": 2048, 00:28:07.460 "data_size": 63488 00:28:07.460 }, 00:28:07.460 { 00:28:07.460 "name": null, 00:28:07.460 "uuid": "3b2f1960-f809-4d32-8f66-34aadb015ef1", 00:28:07.460 "is_configured": false, 00:28:07.460 "data_offset": 0, 00:28:07.460 "data_size": 63488 00:28:07.460 }, 00:28:07.460 { 00:28:07.460 "name": "BaseBdev3", 00:28:07.460 "uuid": "6040d709-1a0d-455d-b18e-6706e3b9ef31", 00:28:07.460 "is_configured": true, 00:28:07.460 "data_offset": 2048, 00:28:07.460 "data_size": 63488 00:28:07.460 }, 00:28:07.460 { 00:28:07.460 "name": "BaseBdev4", 00:28:07.460 "uuid": 
"7b0be0df-38f0-44f4-a1c8-1aa14f5ad476", 00:28:07.460 "is_configured": true, 00:28:07.460 "data_offset": 2048, 00:28:07.460 "data_size": 63488 00:28:07.460 } 00:28:07.460 ] 00:28:07.460 }' 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:07.460 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.719 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.719 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.719 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.719 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:07.719 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.979 [2024-11-26 17:24:37.861815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.979 17:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.979 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.979 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:07.979 "name": "Existed_Raid", 00:28:07.979 "uuid": "76d5e400-6996-4700-a504-3081355c0a41", 00:28:07.979 "strip_size_kb": 0, 00:28:07.979 "state": "configuring", 00:28:07.979 "raid_level": "raid1", 00:28:07.979 "superblock": true, 00:28:07.979 "num_base_bdevs": 4, 00:28:07.979 "num_base_bdevs_discovered": 2, 00:28:07.979 "num_base_bdevs_operational": 4, 00:28:07.979 "base_bdevs_list": [ 00:28:07.979 { 00:28:07.979 "name": null, 00:28:07.979 
"uuid": "a0cb6977-836e-4c1b-b828-fd83c20faa9e", 00:28:07.979 "is_configured": false, 00:28:07.979 "data_offset": 0, 00:28:07.979 "data_size": 63488 00:28:07.979 }, 00:28:07.979 { 00:28:07.979 "name": null, 00:28:07.979 "uuid": "3b2f1960-f809-4d32-8f66-34aadb015ef1", 00:28:07.979 "is_configured": false, 00:28:07.979 "data_offset": 0, 00:28:07.979 "data_size": 63488 00:28:07.979 }, 00:28:07.979 { 00:28:07.979 "name": "BaseBdev3", 00:28:07.979 "uuid": "6040d709-1a0d-455d-b18e-6706e3b9ef31", 00:28:07.979 "is_configured": true, 00:28:07.979 "data_offset": 2048, 00:28:07.979 "data_size": 63488 00:28:07.979 }, 00:28:07.979 { 00:28:07.979 "name": "BaseBdev4", 00:28:07.979 "uuid": "7b0be0df-38f0-44f4-a1c8-1aa14f5ad476", 00:28:07.979 "is_configured": true, 00:28:07.979 "data_offset": 2048, 00:28:07.979 "data_size": 63488 00:28:07.979 } 00:28:07.979 ] 00:28:07.979 }' 00:28:07.979 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:07.979 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:08.547 [2024-11-26 17:24:38.459009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.547 17:24:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:08.547 "name": "Existed_Raid", 00:28:08.547 "uuid": "76d5e400-6996-4700-a504-3081355c0a41", 00:28:08.547 "strip_size_kb": 0, 00:28:08.547 "state": "configuring", 00:28:08.547 "raid_level": "raid1", 00:28:08.547 "superblock": true, 00:28:08.547 "num_base_bdevs": 4, 00:28:08.547 "num_base_bdevs_discovered": 3, 00:28:08.547 "num_base_bdevs_operational": 4, 00:28:08.547 "base_bdevs_list": [ 00:28:08.547 { 00:28:08.547 "name": null, 00:28:08.547 "uuid": "a0cb6977-836e-4c1b-b828-fd83c20faa9e", 00:28:08.547 "is_configured": false, 00:28:08.547 "data_offset": 0, 00:28:08.547 "data_size": 63488 00:28:08.547 }, 00:28:08.547 { 00:28:08.547 "name": "BaseBdev2", 00:28:08.547 "uuid": "3b2f1960-f809-4d32-8f66-34aadb015ef1", 00:28:08.547 "is_configured": true, 00:28:08.547 "data_offset": 2048, 00:28:08.547 "data_size": 63488 00:28:08.547 }, 00:28:08.547 { 00:28:08.547 "name": "BaseBdev3", 00:28:08.547 "uuid": "6040d709-1a0d-455d-b18e-6706e3b9ef31", 00:28:08.547 "is_configured": true, 00:28:08.547 "data_offset": 2048, 00:28:08.547 "data_size": 63488 00:28:08.547 }, 00:28:08.547 { 00:28:08.547 "name": "BaseBdev4", 00:28:08.547 "uuid": "7b0be0df-38f0-44f4-a1c8-1aa14f5ad476", 00:28:08.547 "is_configured": true, 00:28:08.547 "data_offset": 2048, 00:28:08.547 "data_size": 63488 00:28:08.547 } 00:28:08.547 ] 00:28:08.547 }' 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:08.547 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:08.807 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.807 17:24:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.807 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:08.807 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:08.807 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.066 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:28:09.066 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.066 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.066 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:09.066 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:09.066 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.066 17:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a0cb6977-836e-4c1b-b828-fd83c20faa9e 00:28:09.066 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.066 17:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:09.066 [2024-11-26 17:24:39.010621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:09.066 [2024-11-26 17:24:39.010876] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:09.066 [2024-11-26 17:24:39.010898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:09.066 [2024-11-26 17:24:39.011183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:28:09.066 NewBaseBdev 00:28:09.066 [2024-11-26 17:24:39.011351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:09.066 [2024-11-26 17:24:39.011362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:28:09.067 [2024-11-26 17:24:39.011512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.067 17:24:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:09.067 [ 00:28:09.067 { 00:28:09.067 "name": "NewBaseBdev", 00:28:09.067 "aliases": [ 00:28:09.067 "a0cb6977-836e-4c1b-b828-fd83c20faa9e" 00:28:09.067 ], 00:28:09.067 "product_name": "Malloc disk", 00:28:09.067 "block_size": 512, 00:28:09.067 "num_blocks": 65536, 00:28:09.067 "uuid": "a0cb6977-836e-4c1b-b828-fd83c20faa9e", 00:28:09.067 "assigned_rate_limits": { 00:28:09.067 "rw_ios_per_sec": 0, 00:28:09.067 "rw_mbytes_per_sec": 0, 00:28:09.067 "r_mbytes_per_sec": 0, 00:28:09.067 "w_mbytes_per_sec": 0 00:28:09.067 }, 00:28:09.067 "claimed": true, 00:28:09.067 "claim_type": "exclusive_write", 00:28:09.067 "zoned": false, 00:28:09.067 "supported_io_types": { 00:28:09.067 "read": true, 00:28:09.067 "write": true, 00:28:09.067 "unmap": true, 00:28:09.067 "flush": true, 00:28:09.067 "reset": true, 00:28:09.067 "nvme_admin": false, 00:28:09.067 "nvme_io": false, 00:28:09.067 "nvme_io_md": false, 00:28:09.067 "write_zeroes": true, 00:28:09.067 "zcopy": true, 00:28:09.067 "get_zone_info": false, 00:28:09.067 "zone_management": false, 00:28:09.067 "zone_append": false, 00:28:09.067 "compare": false, 00:28:09.067 "compare_and_write": false, 00:28:09.067 "abort": true, 00:28:09.067 "seek_hole": false, 00:28:09.067 "seek_data": false, 00:28:09.067 "copy": true, 00:28:09.067 "nvme_iov_md": false 00:28:09.067 }, 00:28:09.067 "memory_domains": [ 00:28:09.067 { 00:28:09.067 "dma_device_id": "system", 00:28:09.067 "dma_device_type": 1 00:28:09.067 }, 00:28:09.067 { 00:28:09.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.067 "dma_device_type": 2 00:28:09.067 } 00:28:09.067 ], 00:28:09.067 "driver_specific": {} 00:28:09.067 } 00:28:09.067 ] 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:09.067 17:24:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:09.067 "name": "Existed_Raid", 00:28:09.067 "uuid": "76d5e400-6996-4700-a504-3081355c0a41", 00:28:09.067 "strip_size_kb": 0, 00:28:09.067 
"state": "online", 00:28:09.067 "raid_level": "raid1", 00:28:09.067 "superblock": true, 00:28:09.067 "num_base_bdevs": 4, 00:28:09.067 "num_base_bdevs_discovered": 4, 00:28:09.067 "num_base_bdevs_operational": 4, 00:28:09.067 "base_bdevs_list": [ 00:28:09.067 { 00:28:09.067 "name": "NewBaseBdev", 00:28:09.067 "uuid": "a0cb6977-836e-4c1b-b828-fd83c20faa9e", 00:28:09.067 "is_configured": true, 00:28:09.067 "data_offset": 2048, 00:28:09.067 "data_size": 63488 00:28:09.067 }, 00:28:09.067 { 00:28:09.067 "name": "BaseBdev2", 00:28:09.067 "uuid": "3b2f1960-f809-4d32-8f66-34aadb015ef1", 00:28:09.067 "is_configured": true, 00:28:09.067 "data_offset": 2048, 00:28:09.067 "data_size": 63488 00:28:09.067 }, 00:28:09.067 { 00:28:09.067 "name": "BaseBdev3", 00:28:09.067 "uuid": "6040d709-1a0d-455d-b18e-6706e3b9ef31", 00:28:09.067 "is_configured": true, 00:28:09.067 "data_offset": 2048, 00:28:09.067 "data_size": 63488 00:28:09.067 }, 00:28:09.067 { 00:28:09.067 "name": "BaseBdev4", 00:28:09.067 "uuid": "7b0be0df-38f0-44f4-a1c8-1aa14f5ad476", 00:28:09.067 "is_configured": true, 00:28:09.067 "data_offset": 2048, 00:28:09.067 "data_size": 63488 00:28:09.067 } 00:28:09.067 ] 00:28:09.067 }' 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:09.067 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:09.326 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:28:09.326 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:09.326 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:09.326 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:09.326 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:28:09.326 
17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:09.326 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:09.326 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:09.326 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.326 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:09.585 [2024-11-26 17:24:39.442507] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:09.585 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.585 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:09.585 "name": "Existed_Raid", 00:28:09.585 "aliases": [ 00:28:09.585 "76d5e400-6996-4700-a504-3081355c0a41" 00:28:09.585 ], 00:28:09.585 "product_name": "Raid Volume", 00:28:09.585 "block_size": 512, 00:28:09.585 "num_blocks": 63488, 00:28:09.585 "uuid": "76d5e400-6996-4700-a504-3081355c0a41", 00:28:09.585 "assigned_rate_limits": { 00:28:09.585 "rw_ios_per_sec": 0, 00:28:09.585 "rw_mbytes_per_sec": 0, 00:28:09.585 "r_mbytes_per_sec": 0, 00:28:09.585 "w_mbytes_per_sec": 0 00:28:09.585 }, 00:28:09.585 "claimed": false, 00:28:09.585 "zoned": false, 00:28:09.585 "supported_io_types": { 00:28:09.585 "read": true, 00:28:09.585 "write": true, 00:28:09.585 "unmap": false, 00:28:09.585 "flush": false, 00:28:09.585 "reset": true, 00:28:09.585 "nvme_admin": false, 00:28:09.585 "nvme_io": false, 00:28:09.585 "nvme_io_md": false, 00:28:09.585 "write_zeroes": true, 00:28:09.585 "zcopy": false, 00:28:09.585 "get_zone_info": false, 00:28:09.585 "zone_management": false, 00:28:09.585 "zone_append": false, 00:28:09.585 "compare": false, 00:28:09.585 "compare_and_write": false, 00:28:09.585 
"abort": false, 00:28:09.585 "seek_hole": false, 00:28:09.585 "seek_data": false, 00:28:09.585 "copy": false, 00:28:09.585 "nvme_iov_md": false 00:28:09.585 }, 00:28:09.585 "memory_domains": [ 00:28:09.585 { 00:28:09.585 "dma_device_id": "system", 00:28:09.585 "dma_device_type": 1 00:28:09.585 }, 00:28:09.585 { 00:28:09.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.585 "dma_device_type": 2 00:28:09.585 }, 00:28:09.585 { 00:28:09.585 "dma_device_id": "system", 00:28:09.585 "dma_device_type": 1 00:28:09.585 }, 00:28:09.585 { 00:28:09.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.585 "dma_device_type": 2 00:28:09.585 }, 00:28:09.585 { 00:28:09.585 "dma_device_id": "system", 00:28:09.585 "dma_device_type": 1 00:28:09.585 }, 00:28:09.585 { 00:28:09.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.585 "dma_device_type": 2 00:28:09.585 }, 00:28:09.585 { 00:28:09.585 "dma_device_id": "system", 00:28:09.585 "dma_device_type": 1 00:28:09.585 }, 00:28:09.585 { 00:28:09.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.585 "dma_device_type": 2 00:28:09.585 } 00:28:09.585 ], 00:28:09.585 "driver_specific": { 00:28:09.585 "raid": { 00:28:09.585 "uuid": "76d5e400-6996-4700-a504-3081355c0a41", 00:28:09.585 "strip_size_kb": 0, 00:28:09.585 "state": "online", 00:28:09.585 "raid_level": "raid1", 00:28:09.585 "superblock": true, 00:28:09.585 "num_base_bdevs": 4, 00:28:09.585 "num_base_bdevs_discovered": 4, 00:28:09.585 "num_base_bdevs_operational": 4, 00:28:09.585 "base_bdevs_list": [ 00:28:09.585 { 00:28:09.585 "name": "NewBaseBdev", 00:28:09.585 "uuid": "a0cb6977-836e-4c1b-b828-fd83c20faa9e", 00:28:09.585 "is_configured": true, 00:28:09.585 "data_offset": 2048, 00:28:09.585 "data_size": 63488 00:28:09.585 }, 00:28:09.585 { 00:28:09.585 "name": "BaseBdev2", 00:28:09.585 "uuid": "3b2f1960-f809-4d32-8f66-34aadb015ef1", 00:28:09.585 "is_configured": true, 00:28:09.585 "data_offset": 2048, 00:28:09.585 "data_size": 63488 00:28:09.585 }, 00:28:09.585 { 
00:28:09.585 "name": "BaseBdev3", 00:28:09.585 "uuid": "6040d709-1a0d-455d-b18e-6706e3b9ef31", 00:28:09.585 "is_configured": true, 00:28:09.585 "data_offset": 2048, 00:28:09.585 "data_size": 63488 00:28:09.585 }, 00:28:09.585 { 00:28:09.585 "name": "BaseBdev4", 00:28:09.585 "uuid": "7b0be0df-38f0-44f4-a1c8-1aa14f5ad476", 00:28:09.585 "is_configured": true, 00:28:09.585 "data_offset": 2048, 00:28:09.585 "data_size": 63488 00:28:09.585 } 00:28:09.585 ] 00:28:09.585 } 00:28:09.585 } 00:28:09.585 }' 00:28:09.585 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:09.585 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:28:09.585 BaseBdev2 00:28:09.585 BaseBdev3 00:28:09.585 BaseBdev4' 00:28:09.585 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:09.586 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.845 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:09.845 17:24:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:09.846 [2024-11-26 17:24:39.757681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:09.846 [2024-11-26 17:24:39.757717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:09.846 [2024-11-26 17:24:39.757814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:09.846 [2024-11-26 17:24:39.758150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:09.846 [2024-11-26 17:24:39.758168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73957 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73957 ']' 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73957 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73957 00:28:09.846 killing process with pid 73957 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73957' 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73957 00:28:09.846 [2024-11-26 17:24:39.807873] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:09.846 17:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73957 00:28:10.415 [2024-11-26 17:24:40.235761] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:11.354 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:28:11.354 00:28:11.354 real 0m11.377s 00:28:11.354 user 0m17.826s 00:28:11.354 sys 0m2.404s 00:28:11.354 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:28:11.354 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:11.354 ************************************ 00:28:11.354 END TEST raid_state_function_test_sb 00:28:11.354 ************************************ 00:28:11.615 17:24:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:28:11.615 17:24:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:11.615 17:24:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:11.615 17:24:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:11.615 ************************************ 00:28:11.615 START TEST raid_superblock_test 00:28:11.615 ************************************ 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:28:11.615 17:24:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74627 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74627 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74627 ']' 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:11.615 17:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.615 [2024-11-26 17:24:41.638068] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:28:11.615 [2024-11-26 17:24:41.638219] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74627 ] 00:28:11.874 [2024-11-26 17:24:41.818034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.875 [2024-11-26 17:24:41.968429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.134 [2024-11-26 17:24:42.195557] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:12.134 [2024-11-26 17:24:42.195628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:12.394 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.394 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:28:12.394 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:28:12.394 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:12.394 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:28:12.394 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:28:12.394 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:12.394 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:12.394 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:12.394 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:12.394 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:28:12.394 
17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.394 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.654 malloc1 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.654 [2024-11-26 17:24:42.548071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:12.654 [2024-11-26 17:24:42.548144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.654 [2024-11-26 17:24:42.548170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:12.654 [2024-11-26 17:24:42.548184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.654 [2024-11-26 17:24:42.550895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.654 [2024-11-26 17:24:42.550939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:12.654 pt1 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.654 malloc2 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.654 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.655 [2024-11-26 17:24:42.611275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:12.655 [2024-11-26 17:24:42.611487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.655 [2024-11-26 17:24:42.611598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:12.655 [2024-11-26 17:24:42.611744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.655 [2024-11-26 17:24:42.614761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.655 [2024-11-26 17:24:42.614928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:12.655 
pt2 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.655 malloc3 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.655 [2024-11-26 17:24:42.684871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:12.655 [2024-11-26 17:24:42.685068] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.655 [2024-11-26 17:24:42.685140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:12.655 [2024-11-26 17:24:42.685232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.655 [2024-11-26 17:24:42.688356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.655 [2024-11-26 17:24:42.688417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:12.655 pt3 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.655 malloc4 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.655 [2024-11-26 17:24:42.744168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:12.655 [2024-11-26 17:24:42.744339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.655 [2024-11-26 17:24:42.744425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:12.655 [2024-11-26 17:24:42.744494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.655 [2024-11-26 17:24:42.747272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.655 [2024-11-26 17:24:42.747417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:12.655 pt4 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.655 [2024-11-26 17:24:42.756269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:12.655 [2024-11-26 17:24:42.758670] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:12.655 [2024-11-26 17:24:42.758735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:12.655 [2024-11-26 17:24:42.758799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:12.655 [2024-11-26 17:24:42.759013] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:12.655 [2024-11-26 17:24:42.759031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:12.655 [2024-11-26 17:24:42.759315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:12.655 [2024-11-26 17:24:42.759483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:12.655 [2024-11-26 17:24:42.759501] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:12.655 [2024-11-26 17:24:42.759687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:12.655 
17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:12.655 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:12.915 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:12.915 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.915 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.915 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.915 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.915 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:12.915 "name": "raid_bdev1", 00:28:12.915 "uuid": "5582ad93-d835-4c06-8fa2-07c760a87c88", 00:28:12.915 "strip_size_kb": 0, 00:28:12.915 "state": "online", 00:28:12.915 "raid_level": "raid1", 00:28:12.915 "superblock": true, 00:28:12.915 "num_base_bdevs": 4, 00:28:12.915 "num_base_bdevs_discovered": 4, 00:28:12.915 "num_base_bdevs_operational": 4, 00:28:12.915 "base_bdevs_list": [ 00:28:12.915 { 00:28:12.915 "name": "pt1", 00:28:12.915 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:12.915 "is_configured": true, 00:28:12.915 "data_offset": 2048, 00:28:12.915 "data_size": 63488 00:28:12.915 }, 00:28:12.915 { 00:28:12.915 "name": "pt2", 00:28:12.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:12.915 "is_configured": true, 00:28:12.915 "data_offset": 2048, 00:28:12.915 "data_size": 63488 00:28:12.915 }, 00:28:12.915 { 00:28:12.915 "name": "pt3", 00:28:12.915 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:12.915 "is_configured": true, 00:28:12.915 "data_offset": 2048, 00:28:12.915 "data_size": 63488 
00:28:12.915 }, 00:28:12.915 { 00:28:12.915 "name": "pt4", 00:28:12.915 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:12.915 "is_configured": true, 00:28:12.915 "data_offset": 2048, 00:28:12.915 "data_size": 63488 00:28:12.915 } 00:28:12.915 ] 00:28:12.915 }' 00:28:12.915 17:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:12.915 17:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.175 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:28:13.175 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:28:13.175 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:13.175 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:13.175 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:13.175 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:13.175 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:13.175 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:13.175 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.175 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.175 [2024-11-26 17:24:43.196019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:13.175 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.175 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:13.175 "name": "raid_bdev1", 00:28:13.175 "aliases": [ 00:28:13.175 "5582ad93-d835-4c06-8fa2-07c760a87c88" 00:28:13.175 ], 
00:28:13.175 "product_name": "Raid Volume", 00:28:13.175 "block_size": 512, 00:28:13.175 "num_blocks": 63488, 00:28:13.175 "uuid": "5582ad93-d835-4c06-8fa2-07c760a87c88", 00:28:13.175 "assigned_rate_limits": { 00:28:13.175 "rw_ios_per_sec": 0, 00:28:13.175 "rw_mbytes_per_sec": 0, 00:28:13.175 "r_mbytes_per_sec": 0, 00:28:13.175 "w_mbytes_per_sec": 0 00:28:13.175 }, 00:28:13.175 "claimed": false, 00:28:13.175 "zoned": false, 00:28:13.175 "supported_io_types": { 00:28:13.175 "read": true, 00:28:13.175 "write": true, 00:28:13.175 "unmap": false, 00:28:13.175 "flush": false, 00:28:13.175 "reset": true, 00:28:13.175 "nvme_admin": false, 00:28:13.175 "nvme_io": false, 00:28:13.175 "nvme_io_md": false, 00:28:13.175 "write_zeroes": true, 00:28:13.175 "zcopy": false, 00:28:13.175 "get_zone_info": false, 00:28:13.175 "zone_management": false, 00:28:13.175 "zone_append": false, 00:28:13.175 "compare": false, 00:28:13.175 "compare_and_write": false, 00:28:13.175 "abort": false, 00:28:13.175 "seek_hole": false, 00:28:13.175 "seek_data": false, 00:28:13.175 "copy": false, 00:28:13.175 "nvme_iov_md": false 00:28:13.175 }, 00:28:13.175 "memory_domains": [ 00:28:13.175 { 00:28:13.175 "dma_device_id": "system", 00:28:13.175 "dma_device_type": 1 00:28:13.175 }, 00:28:13.175 { 00:28:13.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:13.175 "dma_device_type": 2 00:28:13.175 }, 00:28:13.175 { 00:28:13.175 "dma_device_id": "system", 00:28:13.175 "dma_device_type": 1 00:28:13.175 }, 00:28:13.175 { 00:28:13.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:13.175 "dma_device_type": 2 00:28:13.175 }, 00:28:13.175 { 00:28:13.175 "dma_device_id": "system", 00:28:13.175 "dma_device_type": 1 00:28:13.175 }, 00:28:13.175 { 00:28:13.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:13.175 "dma_device_type": 2 00:28:13.175 }, 00:28:13.175 { 00:28:13.175 "dma_device_id": "system", 00:28:13.175 "dma_device_type": 1 00:28:13.175 }, 00:28:13.175 { 00:28:13.175 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:28:13.175 "dma_device_type": 2 00:28:13.175 } 00:28:13.175 ], 00:28:13.175 "driver_specific": { 00:28:13.175 "raid": { 00:28:13.175 "uuid": "5582ad93-d835-4c06-8fa2-07c760a87c88", 00:28:13.175 "strip_size_kb": 0, 00:28:13.175 "state": "online", 00:28:13.175 "raid_level": "raid1", 00:28:13.175 "superblock": true, 00:28:13.175 "num_base_bdevs": 4, 00:28:13.175 "num_base_bdevs_discovered": 4, 00:28:13.175 "num_base_bdevs_operational": 4, 00:28:13.175 "base_bdevs_list": [ 00:28:13.175 { 00:28:13.175 "name": "pt1", 00:28:13.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:13.175 "is_configured": true, 00:28:13.175 "data_offset": 2048, 00:28:13.175 "data_size": 63488 00:28:13.175 }, 00:28:13.175 { 00:28:13.175 "name": "pt2", 00:28:13.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:13.175 "is_configured": true, 00:28:13.175 "data_offset": 2048, 00:28:13.175 "data_size": 63488 00:28:13.175 }, 00:28:13.175 { 00:28:13.175 "name": "pt3", 00:28:13.175 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:13.175 "is_configured": true, 00:28:13.175 "data_offset": 2048, 00:28:13.175 "data_size": 63488 00:28:13.175 }, 00:28:13.175 { 00:28:13.175 "name": "pt4", 00:28:13.175 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:13.175 "is_configured": true, 00:28:13.175 "data_offset": 2048, 00:28:13.175 "data_size": 63488 00:28:13.175 } 00:28:13.175 ] 00:28:13.175 } 00:28:13.175 } 00:28:13.175 }' 00:28:13.175 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:13.175 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:28:13.175 pt2 00:28:13.175 pt3 00:28:13.175 pt4' 00:28:13.175 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:13.435 17:24:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:28:13.435 [2024-11-26 17:24:43.523446] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:13.435 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.694 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5582ad93-d835-4c06-8fa2-07c760a87c88 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5582ad93-d835-4c06-8fa2-07c760a87c88 ']' 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.695 [2024-11-26 17:24:43.571119] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:13.695 [2024-11-26 17:24:43.571155] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:13.695 [2024-11-26 17:24:43.571259] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:13.695 [2024-11-26 17:24:43.571357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:13.695 [2024-11-26 17:24:43.571378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.695 [2024-11-26 17:24:43.742915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:13.695 [2024-11-26 17:24:43.745298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:13.695 [2024-11-26 17:24:43.745353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:28:13.695 [2024-11-26 17:24:43.745393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:28:13.695 [2024-11-26 17:24:43.745447] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:28:13.695 [2024-11-26 17:24:43.745537] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:13.695 [2024-11-26 17:24:43.745581] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:28:13.695 [2024-11-26 17:24:43.745605] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:28:13.695 [2024-11-26 17:24:43.745648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:13.695 [2024-11-26 17:24:43.745663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:28:13.695 request: 00:28:13.695 { 00:28:13.695 "name": "raid_bdev1", 00:28:13.695 "raid_level": "raid1", 00:28:13.695 "base_bdevs": [ 00:28:13.695 "malloc1", 00:28:13.695 "malloc2", 00:28:13.695 "malloc3", 00:28:13.695 "malloc4" 00:28:13.695 ], 00:28:13.695 "superblock": false, 00:28:13.695 "method": "bdev_raid_create", 00:28:13.695 "req_id": 1 00:28:13.695 } 00:28:13.695 Got JSON-RPC error response 00:28:13.695 response: 00:28:13.695 { 00:28:13.695 "code": -17, 00:28:13.695 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:13.695 } 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.695 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.696 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.696 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.696 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:28:13.696 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:28:13.696 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:13.696 17:24:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.696 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.696 [2024-11-26 17:24:43.802787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:13.696 [2024-11-26 17:24:43.802873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:13.696 [2024-11-26 17:24:43.802896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:13.696 [2024-11-26 17:24:43.802911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:13.696 [2024-11-26 17:24:43.805706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:13.696 [2024-11-26 17:24:43.805755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:13.696 [2024-11-26 17:24:43.805846] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:13.696 [2024-11-26 17:24:43.805949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:13.955 pt1 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:13.955 17:24:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:13.955 "name": "raid_bdev1", 00:28:13.955 "uuid": "5582ad93-d835-4c06-8fa2-07c760a87c88", 00:28:13.955 "strip_size_kb": 0, 00:28:13.955 "state": "configuring", 00:28:13.955 "raid_level": "raid1", 00:28:13.955 "superblock": true, 00:28:13.955 "num_base_bdevs": 4, 00:28:13.955 "num_base_bdevs_discovered": 1, 00:28:13.955 "num_base_bdevs_operational": 4, 00:28:13.955 "base_bdevs_list": [ 00:28:13.955 { 00:28:13.955 "name": "pt1", 00:28:13.955 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:13.955 "is_configured": true, 00:28:13.955 "data_offset": 2048, 00:28:13.955 "data_size": 63488 00:28:13.955 }, 00:28:13.955 { 00:28:13.955 "name": null, 00:28:13.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:13.955 "is_configured": false, 00:28:13.955 "data_offset": 2048, 00:28:13.955 "data_size": 63488 00:28:13.955 }, 00:28:13.955 { 00:28:13.955 "name": null, 00:28:13.955 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:13.955 
"is_configured": false, 00:28:13.955 "data_offset": 2048, 00:28:13.955 "data_size": 63488 00:28:13.955 }, 00:28:13.955 { 00:28:13.955 "name": null, 00:28:13.955 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:13.955 "is_configured": false, 00:28:13.955 "data_offset": 2048, 00:28:13.955 "data_size": 63488 00:28:13.955 } 00:28:13.955 ] 00:28:13.955 }' 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:13.955 17:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.215 [2024-11-26 17:24:44.186304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:14.215 [2024-11-26 17:24:44.186570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:14.215 [2024-11-26 17:24:44.186639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:28:14.215 [2024-11-26 17:24:44.186774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:14.215 [2024-11-26 17:24:44.187323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:14.215 [2024-11-26 17:24:44.187469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:14.215 [2024-11-26 17:24:44.187675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:14.215 [2024-11-26 17:24:44.187795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:28:14.215 pt2 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.215 [2024-11-26 17:24:44.198256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.215 17:24:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:14.215 "name": "raid_bdev1", 00:28:14.215 "uuid": "5582ad93-d835-4c06-8fa2-07c760a87c88", 00:28:14.215 "strip_size_kb": 0, 00:28:14.215 "state": "configuring", 00:28:14.215 "raid_level": "raid1", 00:28:14.215 "superblock": true, 00:28:14.215 "num_base_bdevs": 4, 00:28:14.215 "num_base_bdevs_discovered": 1, 00:28:14.215 "num_base_bdevs_operational": 4, 00:28:14.215 "base_bdevs_list": [ 00:28:14.215 { 00:28:14.215 "name": "pt1", 00:28:14.215 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:14.215 "is_configured": true, 00:28:14.215 "data_offset": 2048, 00:28:14.215 "data_size": 63488 00:28:14.215 }, 00:28:14.215 { 00:28:14.215 "name": null, 00:28:14.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:14.215 "is_configured": false, 00:28:14.215 "data_offset": 0, 00:28:14.215 "data_size": 63488 00:28:14.215 }, 00:28:14.215 { 00:28:14.215 "name": null, 00:28:14.215 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:14.215 "is_configured": false, 00:28:14.215 "data_offset": 2048, 00:28:14.215 "data_size": 63488 00:28:14.215 }, 00:28:14.215 { 00:28:14.215 "name": null, 00:28:14.215 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:14.215 "is_configured": false, 00:28:14.215 "data_offset": 2048, 00:28:14.215 "data_size": 63488 00:28:14.215 } 00:28:14.215 ] 00:28:14.215 }' 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:14.215 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.784 [2024-11-26 17:24:44.653748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:14.784 [2024-11-26 17:24:44.653826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:14.784 [2024-11-26 17:24:44.653853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:28:14.784 [2024-11-26 17:24:44.653866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:14.784 [2024-11-26 17:24:44.654394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:14.784 [2024-11-26 17:24:44.654419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:14.784 [2024-11-26 17:24:44.654528] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:14.784 [2024-11-26 17:24:44.654555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:14.784 pt2 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:14.784 17:24:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.784 [2024-11-26 17:24:44.665667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:14.784 [2024-11-26 17:24:44.665723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:14.784 [2024-11-26 17:24:44.665747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:28:14.784 [2024-11-26 17:24:44.665757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:14.784 [2024-11-26 17:24:44.666173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:14.784 [2024-11-26 17:24:44.666191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:14.784 [2024-11-26 17:24:44.666259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:28:14.784 [2024-11-26 17:24:44.666279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:14.784 pt3 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.784 [2024-11-26 17:24:44.677665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:14.784 [2024-11-26 
17:24:44.677715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:14.784 [2024-11-26 17:24:44.677737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:28:14.784 [2024-11-26 17:24:44.677749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:14.784 [2024-11-26 17:24:44.678169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:14.784 [2024-11-26 17:24:44.678188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:14.784 [2024-11-26 17:24:44.678255] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:28:14.784 [2024-11-26 17:24:44.678294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:14.784 [2024-11-26 17:24:44.678433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:14.784 [2024-11-26 17:24:44.678444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:14.784 [2024-11-26 17:24:44.678729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:14.784 [2024-11-26 17:24:44.678884] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:14.784 [2024-11-26 17:24:44.678916] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:28:14.784 [2024-11-26 17:24:44.679074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:14.784 pt4 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:14.784 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:14.785 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:14.785 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:14.785 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:14.785 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:14.785 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:14.785 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:14.785 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.785 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.785 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.785 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.785 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:14.785 "name": "raid_bdev1", 00:28:14.785 "uuid": "5582ad93-d835-4c06-8fa2-07c760a87c88", 00:28:14.785 "strip_size_kb": 0, 00:28:14.785 "state": "online", 00:28:14.785 "raid_level": "raid1", 00:28:14.785 "superblock": true, 00:28:14.785 "num_base_bdevs": 4, 00:28:14.785 
"num_base_bdevs_discovered": 4, 00:28:14.785 "num_base_bdevs_operational": 4, 00:28:14.785 "base_bdevs_list": [ 00:28:14.785 { 00:28:14.785 "name": "pt1", 00:28:14.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:14.785 "is_configured": true, 00:28:14.785 "data_offset": 2048, 00:28:14.785 "data_size": 63488 00:28:14.785 }, 00:28:14.785 { 00:28:14.785 "name": "pt2", 00:28:14.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:14.785 "is_configured": true, 00:28:14.785 "data_offset": 2048, 00:28:14.785 "data_size": 63488 00:28:14.785 }, 00:28:14.785 { 00:28:14.785 "name": "pt3", 00:28:14.785 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:14.785 "is_configured": true, 00:28:14.785 "data_offset": 2048, 00:28:14.785 "data_size": 63488 00:28:14.785 }, 00:28:14.785 { 00:28:14.785 "name": "pt4", 00:28:14.785 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:14.785 "is_configured": true, 00:28:14.785 "data_offset": 2048, 00:28:14.785 "data_size": 63488 00:28:14.785 } 00:28:14.785 ] 00:28:14.785 }' 00:28:14.785 17:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:14.785 17:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.044 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:28:15.045 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:28:15.045 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:15.045 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:15.045 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:15.045 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:15.045 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:28:15.045 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.045 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:15.045 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.045 [2024-11-26 17:24:45.146112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:15.304 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.304 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:15.304 "name": "raid_bdev1", 00:28:15.304 "aliases": [ 00:28:15.304 "5582ad93-d835-4c06-8fa2-07c760a87c88" 00:28:15.304 ], 00:28:15.304 "product_name": "Raid Volume", 00:28:15.304 "block_size": 512, 00:28:15.304 "num_blocks": 63488, 00:28:15.304 "uuid": "5582ad93-d835-4c06-8fa2-07c760a87c88", 00:28:15.304 "assigned_rate_limits": { 00:28:15.304 "rw_ios_per_sec": 0, 00:28:15.304 "rw_mbytes_per_sec": 0, 00:28:15.304 "r_mbytes_per_sec": 0, 00:28:15.304 "w_mbytes_per_sec": 0 00:28:15.304 }, 00:28:15.304 "claimed": false, 00:28:15.304 "zoned": false, 00:28:15.304 "supported_io_types": { 00:28:15.304 "read": true, 00:28:15.304 "write": true, 00:28:15.304 "unmap": false, 00:28:15.304 "flush": false, 00:28:15.304 "reset": true, 00:28:15.304 "nvme_admin": false, 00:28:15.304 "nvme_io": false, 00:28:15.304 "nvme_io_md": false, 00:28:15.304 "write_zeroes": true, 00:28:15.304 "zcopy": false, 00:28:15.304 "get_zone_info": false, 00:28:15.304 "zone_management": false, 00:28:15.304 "zone_append": false, 00:28:15.304 "compare": false, 00:28:15.304 "compare_and_write": false, 00:28:15.304 "abort": false, 00:28:15.304 "seek_hole": false, 00:28:15.304 "seek_data": false, 00:28:15.304 "copy": false, 00:28:15.304 "nvme_iov_md": false 00:28:15.304 }, 00:28:15.304 "memory_domains": [ 00:28:15.304 { 00:28:15.304 "dma_device_id": "system", 00:28:15.304 
"dma_device_type": 1 00:28:15.304 }, 00:28:15.304 { 00:28:15.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.304 "dma_device_type": 2 00:28:15.304 }, 00:28:15.304 { 00:28:15.304 "dma_device_id": "system", 00:28:15.304 "dma_device_type": 1 00:28:15.304 }, 00:28:15.304 { 00:28:15.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.304 "dma_device_type": 2 00:28:15.304 }, 00:28:15.304 { 00:28:15.304 "dma_device_id": "system", 00:28:15.304 "dma_device_type": 1 00:28:15.304 }, 00:28:15.304 { 00:28:15.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.304 "dma_device_type": 2 00:28:15.304 }, 00:28:15.304 { 00:28:15.304 "dma_device_id": "system", 00:28:15.304 "dma_device_type": 1 00:28:15.304 }, 00:28:15.304 { 00:28:15.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.304 "dma_device_type": 2 00:28:15.304 } 00:28:15.304 ], 00:28:15.304 "driver_specific": { 00:28:15.304 "raid": { 00:28:15.304 "uuid": "5582ad93-d835-4c06-8fa2-07c760a87c88", 00:28:15.304 "strip_size_kb": 0, 00:28:15.304 "state": "online", 00:28:15.304 "raid_level": "raid1", 00:28:15.304 "superblock": true, 00:28:15.304 "num_base_bdevs": 4, 00:28:15.304 "num_base_bdevs_discovered": 4, 00:28:15.304 "num_base_bdevs_operational": 4, 00:28:15.304 "base_bdevs_list": [ 00:28:15.305 { 00:28:15.305 "name": "pt1", 00:28:15.305 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:15.305 "is_configured": true, 00:28:15.305 "data_offset": 2048, 00:28:15.305 "data_size": 63488 00:28:15.305 }, 00:28:15.305 { 00:28:15.305 "name": "pt2", 00:28:15.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:15.305 "is_configured": true, 00:28:15.305 "data_offset": 2048, 00:28:15.305 "data_size": 63488 00:28:15.305 }, 00:28:15.305 { 00:28:15.305 "name": "pt3", 00:28:15.305 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:15.305 "is_configured": true, 00:28:15.305 "data_offset": 2048, 00:28:15.305 "data_size": 63488 00:28:15.305 }, 00:28:15.305 { 00:28:15.305 "name": "pt4", 00:28:15.305 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:28:15.305 "is_configured": true, 00:28:15.305 "data_offset": 2048, 00:28:15.305 "data_size": 63488 00:28:15.305 } 00:28:15.305 ] 00:28:15.305 } 00:28:15.305 } 00:28:15.305 }' 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:28:15.305 pt2 00:28:15.305 pt3 00:28:15.305 pt4' 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:15.305 17:24:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.305 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.564 [2024-11-26 17:24:45.486080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5582ad93-d835-4c06-8fa2-07c760a87c88 '!=' 5582ad93-d835-4c06-8fa2-07c760a87c88 ']' 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:28:15.564 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.565 [2024-11-26 17:24:45.529760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:15.565 17:24:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:15.565 "name": "raid_bdev1", 00:28:15.565 "uuid": "5582ad93-d835-4c06-8fa2-07c760a87c88", 00:28:15.565 "strip_size_kb": 0, 00:28:15.565 "state": "online", 
00:28:15.565 "raid_level": "raid1", 00:28:15.565 "superblock": true, 00:28:15.565 "num_base_bdevs": 4, 00:28:15.565 "num_base_bdevs_discovered": 3, 00:28:15.565 "num_base_bdevs_operational": 3, 00:28:15.565 "base_bdevs_list": [ 00:28:15.565 { 00:28:15.565 "name": null, 00:28:15.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.565 "is_configured": false, 00:28:15.565 "data_offset": 0, 00:28:15.565 "data_size": 63488 00:28:15.565 }, 00:28:15.565 { 00:28:15.565 "name": "pt2", 00:28:15.565 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:15.565 "is_configured": true, 00:28:15.565 "data_offset": 2048, 00:28:15.565 "data_size": 63488 00:28:15.565 }, 00:28:15.565 { 00:28:15.565 "name": "pt3", 00:28:15.565 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:15.565 "is_configured": true, 00:28:15.565 "data_offset": 2048, 00:28:15.565 "data_size": 63488 00:28:15.565 }, 00:28:15.565 { 00:28:15.565 "name": "pt4", 00:28:15.565 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:15.565 "is_configured": true, 00:28:15.565 "data_offset": 2048, 00:28:15.565 "data_size": 63488 00:28:15.565 } 00:28:15.565 ] 00:28:15.565 }' 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:15.565 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.190 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:16.190 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.190 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.190 [2024-11-26 17:24:45.949715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:16.190 [2024-11-26 17:24:45.949755] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:16.190 [2024-11-26 17:24:45.949857] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:28:16.190 [2024-11-26 17:24:45.949961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:16.190 [2024-11-26 17:24:45.949973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:28:16.190 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.190 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.190 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.190 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:28:16.190 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.190 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.190 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:16.190 
17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.190 [2024-11-26 17:24:46.045680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:16.190 [2024-11-26 17:24:46.045750] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:16.190 [2024-11-26 17:24:46.045773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:28:16.190 [2024-11-26 17:24:46.045786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:16.190 [2024-11-26 17:24:46.048493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:16.190 [2024-11-26 17:24:46.048546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:16.190 [2024-11-26 17:24:46.048646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:16.190 [2024-11-26 17:24:46.048718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:16.190 pt2 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.190 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:16.190 "name": "raid_bdev1", 00:28:16.190 "uuid": "5582ad93-d835-4c06-8fa2-07c760a87c88", 00:28:16.190 "strip_size_kb": 0, 00:28:16.190 "state": "configuring", 00:28:16.190 "raid_level": "raid1", 00:28:16.190 "superblock": true, 00:28:16.190 "num_base_bdevs": 4, 00:28:16.190 "num_base_bdevs_discovered": 1, 00:28:16.191 "num_base_bdevs_operational": 3, 00:28:16.191 "base_bdevs_list": [ 00:28:16.191 { 00:28:16.191 "name": null, 00:28:16.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.191 "is_configured": false, 00:28:16.191 "data_offset": 2048, 00:28:16.191 "data_size": 63488 00:28:16.191 }, 00:28:16.191 { 00:28:16.191 "name": "pt2", 00:28:16.191 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:16.191 "is_configured": true, 00:28:16.191 "data_offset": 2048, 00:28:16.191 "data_size": 63488 00:28:16.191 }, 00:28:16.191 { 00:28:16.191 "name": null, 00:28:16.191 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:16.191 "is_configured": false, 00:28:16.191 "data_offset": 2048, 00:28:16.191 "data_size": 63488 00:28:16.191 }, 00:28:16.191 { 00:28:16.191 "name": null, 00:28:16.191 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:16.191 "is_configured": false, 00:28:16.191 "data_offset": 2048, 00:28:16.191 "data_size": 63488 00:28:16.191 } 00:28:16.191 ] 00:28:16.191 }' 
00:28:16.191 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:16.191 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.449 [2024-11-26 17:24:46.521725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:16.449 [2024-11-26 17:24:46.521827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:16.449 [2024-11-26 17:24:46.521858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:28:16.449 [2024-11-26 17:24:46.521870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:16.449 [2024-11-26 17:24:46.522403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:16.449 [2024-11-26 17:24:46.522423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:16.449 [2024-11-26 17:24:46.522546] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:28:16.449 [2024-11-26 17:24:46.522575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:16.449 pt3 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.449 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.708 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.708 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:16.708 "name": "raid_bdev1", 00:28:16.708 "uuid": "5582ad93-d835-4c06-8fa2-07c760a87c88", 00:28:16.708 "strip_size_kb": 0, 00:28:16.708 "state": "configuring", 00:28:16.708 "raid_level": "raid1", 00:28:16.708 "superblock": true, 00:28:16.708 "num_base_bdevs": 4, 00:28:16.708 "num_base_bdevs_discovered": 2, 00:28:16.708 "num_base_bdevs_operational": 3, 00:28:16.708 
"base_bdevs_list": [ 00:28:16.708 { 00:28:16.708 "name": null, 00:28:16.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.708 "is_configured": false, 00:28:16.708 "data_offset": 2048, 00:28:16.708 "data_size": 63488 00:28:16.708 }, 00:28:16.708 { 00:28:16.708 "name": "pt2", 00:28:16.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:16.708 "is_configured": true, 00:28:16.708 "data_offset": 2048, 00:28:16.708 "data_size": 63488 00:28:16.708 }, 00:28:16.708 { 00:28:16.708 "name": "pt3", 00:28:16.708 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:16.708 "is_configured": true, 00:28:16.708 "data_offset": 2048, 00:28:16.708 "data_size": 63488 00:28:16.708 }, 00:28:16.708 { 00:28:16.708 "name": null, 00:28:16.708 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:16.708 "is_configured": false, 00:28:16.708 "data_offset": 2048, 00:28:16.708 "data_size": 63488 00:28:16.708 } 00:28:16.708 ] 00:28:16.708 }' 00:28:16.708 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:16.708 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.967 [2024-11-26 17:24:46.945752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:16.967 [2024-11-26 17:24:46.945847] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:16.967 [2024-11-26 17:24:46.945883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:28:16.967 [2024-11-26 17:24:46.945896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:16.967 [2024-11-26 17:24:46.946418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:16.967 [2024-11-26 17:24:46.946444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:16.967 [2024-11-26 17:24:46.946573] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:28:16.967 [2024-11-26 17:24:46.946605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:16.967 [2024-11-26 17:24:46.946759] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:16.967 [2024-11-26 17:24:46.946770] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:16.967 [2024-11-26 17:24:46.947050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:28:16.967 [2024-11-26 17:24:46.947219] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:16.967 [2024-11-26 17:24:46.947234] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:28:16.967 [2024-11-26 17:24:46.947386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.967 pt4 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:16.967 "name": "raid_bdev1", 00:28:16.967 "uuid": "5582ad93-d835-4c06-8fa2-07c760a87c88", 00:28:16.967 "strip_size_kb": 0, 00:28:16.967 "state": "online", 00:28:16.967 "raid_level": "raid1", 00:28:16.967 "superblock": true, 00:28:16.967 "num_base_bdevs": 4, 00:28:16.967 "num_base_bdevs_discovered": 3, 00:28:16.967 "num_base_bdevs_operational": 3, 00:28:16.967 "base_bdevs_list": [ 00:28:16.967 { 00:28:16.967 "name": null, 00:28:16.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.967 "is_configured": false, 00:28:16.967 
"data_offset": 2048, 00:28:16.967 "data_size": 63488 00:28:16.967 }, 00:28:16.967 { 00:28:16.967 "name": "pt2", 00:28:16.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:16.967 "is_configured": true, 00:28:16.967 "data_offset": 2048, 00:28:16.967 "data_size": 63488 00:28:16.967 }, 00:28:16.967 { 00:28:16.967 "name": "pt3", 00:28:16.967 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:16.967 "is_configured": true, 00:28:16.967 "data_offset": 2048, 00:28:16.967 "data_size": 63488 00:28:16.967 }, 00:28:16.967 { 00:28:16.967 "name": "pt4", 00:28:16.967 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:16.967 "is_configured": true, 00:28:16.967 "data_offset": 2048, 00:28:16.967 "data_size": 63488 00:28:16.967 } 00:28:16.967 ] 00:28:16.967 }' 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:16.967 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:17.536 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:17.536 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.536 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:17.536 [2024-11-26 17:24:47.349505] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:17.536 [2024-11-26 17:24:47.349700] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:17.536 [2024-11-26 17:24:47.349901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:17.536 [2024-11-26 17:24:47.350078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:17.536 [2024-11-26 17:24:47.350197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:28:17.536 17:24:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.536 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:17.536 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:28:17.536 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.536 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:17.536 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.536 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:28:17.536 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:17.537 [2024-11-26 17:24:47.417410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:17.537 [2024-11-26 17:24:47.417642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:28:17.537 [2024-11-26 17:24:47.417676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:28:17.537 [2024-11-26 17:24:47.417696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:17.537 [2024-11-26 17:24:47.420438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:17.537 [2024-11-26 17:24:47.420486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:17.537 [2024-11-26 17:24:47.420608] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:17.537 [2024-11-26 17:24:47.420664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:17.537 [2024-11-26 17:24:47.420813] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:17.537 [2024-11-26 17:24:47.420830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:17.537 [2024-11-26 17:24:47.420848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:28:17.537 [2024-11-26 17:24:47.420915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:17.537 [2024-11-26 17:24:47.421017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:17.537 pt1 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:17.537 "name": "raid_bdev1", 00:28:17.537 "uuid": "5582ad93-d835-4c06-8fa2-07c760a87c88", 00:28:17.537 "strip_size_kb": 0, 00:28:17.537 "state": "configuring", 00:28:17.537 "raid_level": "raid1", 00:28:17.537 "superblock": true, 00:28:17.537 "num_base_bdevs": 4, 00:28:17.537 "num_base_bdevs_discovered": 2, 00:28:17.537 "num_base_bdevs_operational": 3, 00:28:17.537 "base_bdevs_list": [ 00:28:17.537 { 00:28:17.537 "name": null, 00:28:17.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.537 "is_configured": false, 00:28:17.537 "data_offset": 2048, 00:28:17.537 
"data_size": 63488 00:28:17.537 }, 00:28:17.537 { 00:28:17.537 "name": "pt2", 00:28:17.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:17.537 "is_configured": true, 00:28:17.537 "data_offset": 2048, 00:28:17.537 "data_size": 63488 00:28:17.537 }, 00:28:17.537 { 00:28:17.537 "name": "pt3", 00:28:17.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:17.537 "is_configured": true, 00:28:17.537 "data_offset": 2048, 00:28:17.537 "data_size": 63488 00:28:17.537 }, 00:28:17.537 { 00:28:17.537 "name": null, 00:28:17.537 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:17.537 "is_configured": false, 00:28:17.537 "data_offset": 2048, 00:28:17.537 "data_size": 63488 00:28:17.537 } 00:28:17.537 ] 00:28:17.537 }' 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:17.537 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:17.798 [2024-11-26 
17:24:47.856792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:17.798 [2024-11-26 17:24:47.856870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:17.798 [2024-11-26 17:24:47.856901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:28:17.798 [2024-11-26 17:24:47.856914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:17.798 [2024-11-26 17:24:47.857421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:17.798 [2024-11-26 17:24:47.857440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:17.798 [2024-11-26 17:24:47.857578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:28:17.798 [2024-11-26 17:24:47.857621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:17.798 [2024-11-26 17:24:47.857785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:28:17.798 [2024-11-26 17:24:47.857796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:17.798 [2024-11-26 17:24:47.858088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:28:17.798 [2024-11-26 17:24:47.858245] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:28:17.798 [2024-11-26 17:24:47.858258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:28:17.798 [2024-11-26 17:24:47.858407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:17.798 pt4 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:17.798 17:24:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:17.798 "name": "raid_bdev1", 00:28:17.798 "uuid": "5582ad93-d835-4c06-8fa2-07c760a87c88", 00:28:17.798 "strip_size_kb": 0, 00:28:17.798 "state": "online", 00:28:17.798 "raid_level": "raid1", 00:28:17.798 "superblock": true, 00:28:17.798 "num_base_bdevs": 4, 00:28:17.798 "num_base_bdevs_discovered": 3, 00:28:17.798 "num_base_bdevs_operational": 3, 00:28:17.798 "base_bdevs_list": [ 00:28:17.798 { 
00:28:17.798 "name": null, 00:28:17.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.798 "is_configured": false, 00:28:17.798 "data_offset": 2048, 00:28:17.798 "data_size": 63488 00:28:17.798 }, 00:28:17.798 { 00:28:17.798 "name": "pt2", 00:28:17.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:17.798 "is_configured": true, 00:28:17.798 "data_offset": 2048, 00:28:17.798 "data_size": 63488 00:28:17.798 }, 00:28:17.798 { 00:28:17.798 "name": "pt3", 00:28:17.798 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:17.798 "is_configured": true, 00:28:17.798 "data_offset": 2048, 00:28:17.798 "data_size": 63488 00:28:17.798 }, 00:28:17.798 { 00:28:17.798 "name": "pt4", 00:28:17.798 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:17.798 "is_configured": true, 00:28:17.798 "data_offset": 2048, 00:28:17.798 "data_size": 63488 00:28:17.798 } 00:28:17.798 ] 00:28:17.798 }' 00:28:17.798 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:18.057 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:28:18.317 
17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:18.317 [2024-11-26 17:24:48.316705] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5582ad93-d835-4c06-8fa2-07c760a87c88 '!=' 5582ad93-d835-4c06-8fa2-07c760a87c88 ']' 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74627 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74627 ']' 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74627 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74627 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:18.317 killing process with pid 74627 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74627' 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74627 00:28:18.317 [2024-11-26 17:24:48.390236] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:18.317 [2024-11-26 17:24:48.390355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:18.317 [2024-11-26 17:24:48.390441] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:18.317 [2024-11-26 17:24:48.390456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:28:18.317 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74627 00:28:18.885 [2024-11-26 17:24:48.809163] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:20.260 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:28:20.260 ************************************ 00:28:20.260 END TEST raid_superblock_test 00:28:20.260 ************************************ 00:28:20.260 00:28:20.260 real 0m8.456s 00:28:20.260 user 0m13.156s 00:28:20.260 sys 0m1.794s 00:28:20.260 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:20.260 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:20.260 17:24:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:28:20.260 17:24:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:20.260 17:24:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.260 17:24:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:20.260 ************************************ 00:28:20.260 START TEST raid_read_error_test 00:28:20.260 ************************************ 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:28:20.260 17:24:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VKdYyYl1r2 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75115 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75115 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75115 ']' 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.260 17:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:20.260 [2024-11-26 17:24:50.183860] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:28:20.260 [2024-11-26 17:24:50.184005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75115 ] 00:28:20.260 [2024-11-26 17:24:50.361449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.532 [2024-11-26 17:24:50.498700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.790 [2024-11-26 17:24:50.710663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:20.790 [2024-11-26 17:24:50.710959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:21.048 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.048 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:28:21.048 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:21.048 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:21.048 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.048 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.048 BaseBdev1_malloc 00:28:21.048 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.048 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:28:21.048 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.048 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.048 true 00:28:21.048 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:21.048 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:28:21.048 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.048 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.048 [2024-11-26 17:24:51.098283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:28:21.049 [2024-11-26 17:24:51.098505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:21.049 [2024-11-26 17:24:51.098551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:21.049 [2024-11-26 17:24:51.098568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:21.049 [2024-11-26 17:24:51.101251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:21.049 [2024-11-26 17:24:51.101299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:21.049 BaseBdev1 00:28:21.049 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.049 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:21.049 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:21.049 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.049 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.049 BaseBdev2_malloc 00:28:21.049 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.049 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:28:21.049 17:24:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.049 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.308 true 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.308 [2024-11-26 17:24:51.171676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:28:21.308 [2024-11-26 17:24:51.171745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:21.308 [2024-11-26 17:24:51.171765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:21.308 [2024-11-26 17:24:51.171779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:21.308 [2024-11-26 17:24:51.174341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:21.308 [2024-11-26 17:24:51.174388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:21.308 BaseBdev2 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.308 BaseBdev3_malloc 00:28:21.308 17:24:51 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.308 true 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.308 [2024-11-26 17:24:51.251438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:28:21.308 [2024-11-26 17:24:51.251506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:21.308 [2024-11-26 17:24:51.251584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:28:21.308 [2024-11-26 17:24:51.251602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:21.308 [2024-11-26 17:24:51.254177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:21.308 [2024-11-26 17:24:51.254355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:21.308 BaseBdev3 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.308 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.308 BaseBdev4_malloc 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.309 true 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.309 [2024-11-26 17:24:51.322100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:28:21.309 [2024-11-26 17:24:51.322169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:21.309 [2024-11-26 17:24:51.322191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:21.309 [2024-11-26 17:24:51.322205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:21.309 [2024-11-26 17:24:51.324791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:21.309 [2024-11-26 17:24:51.324963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:21.309 BaseBdev4 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.309 [2024-11-26 17:24:51.334142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:21.309 [2024-11-26 17:24:51.336391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:21.309 [2024-11-26 17:24:51.336599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:21.309 [2024-11-26 17:24:51.336678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:21.309 [2024-11-26 17:24:51.336923] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:28:21.309 [2024-11-26 17:24:51.336939] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:21.309 [2024-11-26 17:24:51.337207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:28:21.309 [2024-11-26 17:24:51.337380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:28:21.309 [2024-11-26 17:24:51.337391] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:28:21.309 [2024-11-26 17:24:51.337584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:21.309 17:24:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:21.309 "name": "raid_bdev1", 00:28:21.309 "uuid": "f133b116-2b6a-4bce-a06f-2b8e3990ce56", 00:28:21.309 "strip_size_kb": 0, 00:28:21.309 "state": "online", 00:28:21.309 "raid_level": "raid1", 00:28:21.309 "superblock": true, 00:28:21.309 "num_base_bdevs": 4, 00:28:21.309 "num_base_bdevs_discovered": 4, 00:28:21.309 "num_base_bdevs_operational": 4, 00:28:21.309 "base_bdevs_list": [ 00:28:21.309 { 
00:28:21.309 "name": "BaseBdev1", 00:28:21.309 "uuid": "20414785-faa9-51df-ac42-b5b812660e06", 00:28:21.309 "is_configured": true, 00:28:21.309 "data_offset": 2048, 00:28:21.309 "data_size": 63488 00:28:21.309 }, 00:28:21.309 { 00:28:21.309 "name": "BaseBdev2", 00:28:21.309 "uuid": "34b74a8f-ca3b-5082-abf7-3064ac652045", 00:28:21.309 "is_configured": true, 00:28:21.309 "data_offset": 2048, 00:28:21.309 "data_size": 63488 00:28:21.309 }, 00:28:21.309 { 00:28:21.309 "name": "BaseBdev3", 00:28:21.309 "uuid": "10641dbe-75f9-555c-82ca-8139a47bf435", 00:28:21.309 "is_configured": true, 00:28:21.309 "data_offset": 2048, 00:28:21.309 "data_size": 63488 00:28:21.309 }, 00:28:21.309 { 00:28:21.309 "name": "BaseBdev4", 00:28:21.309 "uuid": "daf1020d-ba67-58fd-a0f2-850286464282", 00:28:21.309 "is_configured": true, 00:28:21.309 "data_offset": 2048, 00:28:21.309 "data_size": 63488 00:28:21.309 } 00:28:21.309 ] 00:28:21.309 }' 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:21.309 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.876 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:28:21.876 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:21.876 [2024-11-26 17:24:51.827072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.811 17:24:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.811 17:24:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.811 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:22.811 "name": "raid_bdev1", 00:28:22.811 "uuid": "f133b116-2b6a-4bce-a06f-2b8e3990ce56", 00:28:22.811 "strip_size_kb": 0, 00:28:22.811 "state": "online", 00:28:22.811 "raid_level": "raid1", 00:28:22.811 "superblock": true, 00:28:22.811 "num_base_bdevs": 4, 00:28:22.811 "num_base_bdevs_discovered": 4, 00:28:22.811 "num_base_bdevs_operational": 4, 00:28:22.811 "base_bdevs_list": [ 00:28:22.811 { 00:28:22.811 "name": "BaseBdev1", 00:28:22.811 "uuid": "20414785-faa9-51df-ac42-b5b812660e06", 00:28:22.811 "is_configured": true, 00:28:22.811 "data_offset": 2048, 00:28:22.811 "data_size": 63488 00:28:22.811 }, 00:28:22.811 { 00:28:22.811 "name": "BaseBdev2", 00:28:22.811 "uuid": "34b74a8f-ca3b-5082-abf7-3064ac652045", 00:28:22.812 "is_configured": true, 00:28:22.812 "data_offset": 2048, 00:28:22.812 "data_size": 63488 00:28:22.812 }, 00:28:22.812 { 00:28:22.812 "name": "BaseBdev3", 00:28:22.812 "uuid": "10641dbe-75f9-555c-82ca-8139a47bf435", 00:28:22.812 "is_configured": true, 00:28:22.812 "data_offset": 2048, 00:28:22.812 "data_size": 63488 00:28:22.812 }, 00:28:22.812 { 00:28:22.812 "name": "BaseBdev4", 00:28:22.812 "uuid": "daf1020d-ba67-58fd-a0f2-850286464282", 00:28:22.812 "is_configured": true, 00:28:22.812 "data_offset": 2048, 00:28:22.812 "data_size": 63488 00:28:22.812 } 00:28:22.812 ] 00:28:22.812 }' 00:28:22.812 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:22.812 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.070 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:23.070 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.070 17:24:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:28:23.070 [2024-11-26 17:24:53.179920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:23.070 [2024-11-26 17:24:53.179968] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:23.070 [2024-11-26 17:24:53.183157] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:23.328 [2024-11-26 17:24:53.183434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:23.328 [2024-11-26 17:24:53.183624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:23.328 [2024-11-26 17:24:53.183647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:28:23.328 { 00:28:23.328 "results": [ 00:28:23.328 { 00:28:23.328 "job": "raid_bdev1", 00:28:23.328 "core_mask": "0x1", 00:28:23.328 "workload": "randrw", 00:28:23.328 "percentage": 50, 00:28:23.328 "status": "finished", 00:28:23.328 "queue_depth": 1, 00:28:23.328 "io_size": 131072, 00:28:23.328 "runtime": 1.352563, 00:28:23.328 "iops": 9794.737842155966, 00:28:23.329 "mibps": 1224.3422302694958, 00:28:23.329 "io_failed": 0, 00:28:23.329 "io_timeout": 0, 00:28:23.329 "avg_latency_us": 99.38868905573986, 00:28:23.329 "min_latency_us": 23.955020080321287, 00:28:23.329 "max_latency_us": 1546.2811244979919 00:28:23.329 } 00:28:23.329 ], 00:28:23.329 "core_count": 1 00:28:23.329 } 00:28:23.329 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.329 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75115 00:28:23.329 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75115 ']' 00:28:23.329 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75115 00:28:23.329 17:24:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:28:23.329 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.329 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75115 00:28:23.329 killing process with pid 75115 00:28:23.329 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:23.329 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:23.329 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75115' 00:28:23.329 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75115 00:28:23.329 [2024-11-26 17:24:53.237740] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:23.329 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75115 00:28:23.587 [2024-11-26 17:24:53.588509] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:24.963 17:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:28:24.963 17:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:28:24.963 17:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VKdYyYl1r2 00:28:24.963 17:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:28:24.963 17:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:28:24.963 ************************************ 00:28:24.963 END TEST raid_read_error_test 00:28:24.963 17:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:24.963 17:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:28:24.963 17:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 
00:28:24.963 00:28:24.963 real 0m4.815s 00:28:24.963 user 0m5.559s 00:28:24.963 sys 0m0.654s 00:28:24.963 17:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.963 17:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.963 ************************************ 00:28:24.963 17:24:54 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:28:24.963 17:24:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:24.963 17:24:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:24.963 17:24:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:24.963 ************************************ 00:28:24.963 START TEST raid_write_error_test 00:28:24.963 ************************************ 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.scRMYerWRS 00:28:24.963 17:24:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75260 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75260 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75260 ']' 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:24.963 17:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.266 [2024-11-26 17:24:55.086574] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:28:25.266 [2024-11-26 17:24:55.086928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75260 ] 00:28:25.266 [2024-11-26 17:24:55.255611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.525 [2024-11-26 17:24:55.403622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.525 [2024-11-26 17:24:55.633635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:25.525 [2024-11-26 17:24:55.633937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:26.092 17:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:26.092 17:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:28:26.092 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:26.092 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:26.092 17:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.092 17:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.092 BaseBdev1_malloc 00:28:26.092 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.092 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:28:26.092 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.092 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.092 true 00:28:26.092 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:28:26.092 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:28:26.092 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.092 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.092 [2024-11-26 17:24:56.057006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:28:26.092 [2024-11-26 17:24:56.057121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:26.092 [2024-11-26 17:24:56.057159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:26.092 [2024-11-26 17:24:56.057181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:26.092 [2024-11-26 17:24:56.060224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:26.092 [2024-11-26 17:24:56.060445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:26.092 BaseBdev1 00:28:26.092 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.092 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:26.092 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:26.092 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.092 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.092 BaseBdev2_malloc 00:28:26.092 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.092 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:28:26.092 17:24:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.093 true 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.093 [2024-11-26 17:24:56.126189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:28:26.093 [2024-11-26 17:24:56.126410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:26.093 [2024-11-26 17:24:56.126445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:26.093 [2024-11-26 17:24:56.126462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:26.093 [2024-11-26 17:24:56.129238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:26.093 [2024-11-26 17:24:56.129286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:26.093 BaseBdev2 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:28:26.093 BaseBdev3_malloc 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.093 true 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.093 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.352 [2024-11-26 17:24:56.208634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:28:26.352 [2024-11-26 17:24:56.208858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:26.352 [2024-11-26 17:24:56.208894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:28:26.352 [2024-11-26 17:24:56.208912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:26.352 [2024-11-26 17:24:56.211799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:26.352 [2024-11-26 17:24:56.211843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:26.352 BaseBdev3 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.352 BaseBdev4_malloc 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.352 true 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.352 [2024-11-26 17:24:56.280604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:28:26.352 [2024-11-26 17:24:56.280679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:26.352 [2024-11-26 17:24:56.280708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:26.352 [2024-11-26 17:24:56.280724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:26.352 [2024-11-26 17:24:56.283604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:26.352 [2024-11-26 17:24:56.283657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:26.352 BaseBdev4 
00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.352 [2024-11-26 17:24:56.292705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:26.352 [2024-11-26 17:24:56.295407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:26.352 [2024-11-26 17:24:56.295665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:26.352 [2024-11-26 17:24:56.295785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:26.352 [2024-11-26 17:24:56.296161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:28:26.352 [2024-11-26 17:24:56.296273] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:26.352 [2024-11-26 17:24:56.296667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:28:26.352 [2024-11-26 17:24:56.296973] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:28:26.352 [2024-11-26 17:24:56.297067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:28:26.352 [2024-11-26 17:24:56.297435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:26.352 "name": "raid_bdev1", 00:28:26.352 "uuid": "6b3a22a1-7661-41a1-b71e-6afd622c3ce6", 00:28:26.352 "strip_size_kb": 0, 00:28:26.352 "state": "online", 00:28:26.352 "raid_level": "raid1", 00:28:26.352 "superblock": true, 00:28:26.352 "num_base_bdevs": 4, 00:28:26.352 "num_base_bdevs_discovered": 4, 00:28:26.352 
"num_base_bdevs_operational": 4, 00:28:26.352 "base_bdevs_list": [ 00:28:26.352 { 00:28:26.352 "name": "BaseBdev1", 00:28:26.352 "uuid": "98591673-92f4-5a7d-8f76-d348db495677", 00:28:26.352 "is_configured": true, 00:28:26.352 "data_offset": 2048, 00:28:26.352 "data_size": 63488 00:28:26.352 }, 00:28:26.352 { 00:28:26.352 "name": "BaseBdev2", 00:28:26.352 "uuid": "90dc0f91-937f-5e8b-90c8-c0888f6b6b20", 00:28:26.352 "is_configured": true, 00:28:26.352 "data_offset": 2048, 00:28:26.352 "data_size": 63488 00:28:26.352 }, 00:28:26.352 { 00:28:26.352 "name": "BaseBdev3", 00:28:26.352 "uuid": "26d5d464-f362-5739-a34a-1e48204a8a4d", 00:28:26.352 "is_configured": true, 00:28:26.352 "data_offset": 2048, 00:28:26.352 "data_size": 63488 00:28:26.352 }, 00:28:26.352 { 00:28:26.352 "name": "BaseBdev4", 00:28:26.352 "uuid": "5ed04fdd-08dc-5638-8720-7ab7e084d2d0", 00:28:26.352 "is_configured": true, 00:28:26.352 "data_offset": 2048, 00:28:26.352 "data_size": 63488 00:28:26.352 } 00:28:26.352 ] 00:28:26.352 }' 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:26.352 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.919 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:28:26.919 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:26.919 [2024-11-26 17:24:56.886325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.854 [2024-11-26 17:24:57.760574] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:28:27.854 [2024-11-26 17:24:57.760656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:27.854 [2024-11-26 17:24:57.760920] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:27.854 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:27.855 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:27.855 17:24:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:28:27.855 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:27.855 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.855 17:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.855 17:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.855 17:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.855 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:27.855 "name": "raid_bdev1", 00:28:27.855 "uuid": "6b3a22a1-7661-41a1-b71e-6afd622c3ce6", 00:28:27.855 "strip_size_kb": 0, 00:28:27.855 "state": "online", 00:28:27.855 "raid_level": "raid1", 00:28:27.855 "superblock": true, 00:28:27.855 "num_base_bdevs": 4, 00:28:27.855 "num_base_bdevs_discovered": 3, 00:28:27.855 "num_base_bdevs_operational": 3, 00:28:27.855 "base_bdevs_list": [ 00:28:27.855 { 00:28:27.855 "name": null, 00:28:27.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:27.855 "is_configured": false, 00:28:27.855 "data_offset": 0, 00:28:27.855 "data_size": 63488 00:28:27.855 }, 00:28:27.855 { 00:28:27.855 "name": "BaseBdev2", 00:28:27.855 "uuid": "90dc0f91-937f-5e8b-90c8-c0888f6b6b20", 00:28:27.855 "is_configured": true, 00:28:27.855 "data_offset": 2048, 00:28:27.855 "data_size": 63488 00:28:27.855 }, 00:28:27.855 { 00:28:27.855 "name": "BaseBdev3", 00:28:27.855 "uuid": "26d5d464-f362-5739-a34a-1e48204a8a4d", 00:28:27.855 "is_configured": true, 00:28:27.855 "data_offset": 2048, 00:28:27.855 "data_size": 63488 00:28:27.855 }, 00:28:27.855 { 00:28:27.855 "name": "BaseBdev4", 00:28:27.855 "uuid": "5ed04fdd-08dc-5638-8720-7ab7e084d2d0", 00:28:27.855 "is_configured": true, 00:28:27.855 "data_offset": 2048, 00:28:27.855 "data_size": 63488 00:28:27.855 } 00:28:27.855 ] 
00:28:27.855 }' 00:28:27.855 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:27.855 17:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.113 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:28.113 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.113 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.113 [2024-11-26 17:24:58.204372] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:28.113 [2024-11-26 17:24:58.204410] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:28.113 [2024-11-26 17:24:58.207131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:28.113 [2024-11-26 17:24:58.207343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:28.113 [2024-11-26 17:24:58.207481] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:28.113 [2024-11-26 17:24:58.207495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:28:28.113 { 00:28:28.113 "results": [ 00:28:28.113 { 00:28:28.113 "job": "raid_bdev1", 00:28:28.113 "core_mask": "0x1", 00:28:28.113 "workload": "randrw", 00:28:28.113 "percentage": 50, 00:28:28.113 "status": "finished", 00:28:28.113 "queue_depth": 1, 00:28:28.113 "io_size": 131072, 00:28:28.113 "runtime": 1.317846, 00:28:28.113 "iops": 10317.594013261034, 00:28:28.113 "mibps": 1289.6992516576292, 00:28:28.113 "io_failed": 0, 00:28:28.113 "io_timeout": 0, 00:28:28.113 "avg_latency_us": 94.2293490797787, 00:28:28.113 "min_latency_us": 24.469076305220884, 00:28:28.113 "max_latency_us": 1401.5228915662651 00:28:28.113 } 00:28:28.113 ], 00:28:28.113 "core_count": 1 
00:28:28.113 } 00:28:28.113 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.113 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75260 00:28:28.113 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75260 ']' 00:28:28.113 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75260 00:28:28.113 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:28:28.113 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:28.113 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75260 00:28:28.371 killing process with pid 75260 00:28:28.371 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:28.371 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:28.371 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75260' 00:28:28.371 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75260 00:28:28.371 [2024-11-26 17:24:58.246862] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:28.371 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75260 00:28:28.629 [2024-11-26 17:24:58.591978] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:30.002 17:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.scRMYerWRS 00:28:30.002 17:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:28:30.002 17:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:28:30.002 17:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:28:30.002 17:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:28:30.002 17:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:30.002 17:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:28:30.002 17:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:28:30.002 00:28:30.002 real 0m4.894s 00:28:30.002 user 0m5.760s 00:28:30.002 sys 0m0.695s 00:28:30.002 17:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:30.002 ************************************ 00:28:30.002 END TEST raid_write_error_test 00:28:30.002 ************************************ 00:28:30.002 17:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.002 17:24:59 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:28:30.002 17:24:59 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:28:30.002 17:24:59 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:28:30.002 17:24:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:28:30.003 17:24:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:30.003 17:24:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:30.003 ************************************ 00:28:30.003 START TEST raid_rebuild_test 00:28:30.003 ************************************ 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:28:30.003 
17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75405 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75405 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75405 ']' 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:30.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:30.003 17:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.003 [2024-11-26 17:25:00.032590] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:28:30.003 [2024-11-26 17:25:00.032966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:28:30.003 Zero copy mechanism will not be used. 
00:28:30.003 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75405 ] 00:28:30.263 [2024-11-26 17:25:00.215690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.263 [2024-11-26 17:25:00.360290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.554 [2024-11-26 17:25:00.582324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:30.554 [2024-11-26 17:25:00.582590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:30.812 17:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:30.812 17:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:28:30.812 17:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:30.812 17:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:30.812 17:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.812 17:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.070 BaseBdev1_malloc 00:28:31.070 17:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.070 17:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:31.070 17:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.070 17:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.070 [2024-11-26 17:25:00.964095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:31.070 [2024-11-26 17:25:00.964175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:31.070 [2024-11-26 
17:25:00.964204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:31.070 [2024-11-26 17:25:00.964221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:31.070 [2024-11-26 17:25:00.966896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:31.070 [2024-11-26 17:25:00.966943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:31.070 BaseBdev1 00:28:31.070 17:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.070 17:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:31.070 17:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:31.070 17:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.070 17:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.070 BaseBdev2_malloc 00:28:31.070 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.070 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:31.070 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.070 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.070 [2024-11-26 17:25:01.024352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:31.070 [2024-11-26 17:25:01.024438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:31.070 [2024-11-26 17:25:01.024470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:31.070 [2024-11-26 17:25:01.024487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:28:31.070 [2024-11-26 17:25:01.027151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:31.070 [2024-11-26 17:25:01.027199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:31.070 BaseBdev2 00:28:31.070 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.070 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:28:31.070 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.070 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.071 spare_malloc 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.071 spare_delay 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.071 [2024-11-26 17:25:01.107963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:31.071 [2024-11-26 17:25:01.108035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:31.071 [2024-11-26 17:25:01.108059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:28:31.071 [2024-11-26 17:25:01.108074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:31.071 [2024-11-26 17:25:01.110675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:31.071 [2024-11-26 17:25:01.110851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:31.071 spare 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.071 [2024-11-26 17:25:01.120000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:31.071 [2024-11-26 17:25:01.122222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:31.071 [2024-11-26 17:25:01.122433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:31.071 [2024-11-26 17:25:01.122457] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:31.071 [2024-11-26 17:25:01.122745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:31.071 [2024-11-26 17:25:01.122903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:31.071 [2024-11-26 17:25:01.122917] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:31.071 [2024-11-26 17:25:01.123072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.071 
17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:31.071 "name": "raid_bdev1", 00:28:31.071 "uuid": "6cdffd8a-21a7-4925-ad8d-91fbd665a690", 00:28:31.071 "strip_size_kb": 0, 00:28:31.071 "state": "online", 00:28:31.071 "raid_level": "raid1", 00:28:31.071 "superblock": false, 00:28:31.071 "num_base_bdevs": 2, 00:28:31.071 "num_base_bdevs_discovered": 
2, 00:28:31.071 "num_base_bdevs_operational": 2, 00:28:31.071 "base_bdevs_list": [ 00:28:31.071 { 00:28:31.071 "name": "BaseBdev1", 00:28:31.071 "uuid": "37283857-8f4b-5a2c-be5a-13bf4b5320fd", 00:28:31.071 "is_configured": true, 00:28:31.071 "data_offset": 0, 00:28:31.071 "data_size": 65536 00:28:31.071 }, 00:28:31.071 { 00:28:31.071 "name": "BaseBdev2", 00:28:31.071 "uuid": "0c5e840e-bcce-5a3b-a71f-0296492608c3", 00:28:31.071 "is_configured": true, 00:28:31.071 "data_offset": 0, 00:28:31.071 "data_size": 65536 00:28:31.071 } 00:28:31.071 ] 00:28:31.071 }' 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:31.071 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.640 [2024-11-26 17:25:01.583838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:31.640 17:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:31.899 [2024-11-26 17:25:01.871206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:31.899 /dev/nbd0 00:28:31.899 17:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:31.899 17:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:31.899 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:28:31.899 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:28:31.899 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:31.899 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:31.899 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:31.899 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:28:31.899 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:31.899 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:31.899 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:31.899 1+0 records in 00:28:31.899 1+0 records out 00:28:31.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029927 s, 13.7 MB/s 00:28:31.899 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.899 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:28:31.900 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.900 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:31.900 17:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:28:31.900 17:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:31.900 17:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:31.900 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:28:31.900 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:28:31.900 17:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:28:37.181 65536+0 records in 00:28:37.181 65536+0 records out 00:28:37.181 33554432 bytes (34 MB, 32 MiB) copied, 5.15718 s, 6.5 MB/s 00:28:37.181 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:28:37.181 17:25:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:37.181 17:25:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:37.181 17:25:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:37.181 17:25:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:37.181 17:25:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:37.181 17:25:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:37.440 [2024-11-26 17:25:07.325435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:37.440 
17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:37.440 [2024-11-26 17:25:07.365511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:37.440 "name": "raid_bdev1", 00:28:37.440 "uuid": "6cdffd8a-21a7-4925-ad8d-91fbd665a690", 00:28:37.440 "strip_size_kb": 0, 00:28:37.440 "state": "online", 00:28:37.440 "raid_level": "raid1", 00:28:37.440 "superblock": false, 00:28:37.440 "num_base_bdevs": 2, 00:28:37.440 "num_base_bdevs_discovered": 1, 00:28:37.440 "num_base_bdevs_operational": 1, 00:28:37.440 "base_bdevs_list": [ 00:28:37.440 { 00:28:37.440 "name": null, 00:28:37.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.440 "is_configured": false, 00:28:37.440 "data_offset": 0, 00:28:37.440 "data_size": 65536 00:28:37.440 }, 00:28:37.440 { 00:28:37.440 "name": "BaseBdev2", 00:28:37.440 "uuid": "0c5e840e-bcce-5a3b-a71f-0296492608c3", 00:28:37.440 "is_configured": true, 00:28:37.440 "data_offset": 0, 00:28:37.440 "data_size": 65536 00:28:37.440 } 00:28:37.440 ] 00:28:37.440 }' 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:37.440 17:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.007 17:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:38.007 17:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.007 17:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.007 [2024-11-26 17:25:07.824904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:38.007 [2024-11-26 17:25:07.844142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:28:38.007 17:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.007 17:25:07 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:38.007 [2024-11-26 17:25:07.846801] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:38.965 17:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:38.965 17:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:38.965 17:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:38.965 17:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:38.965 17:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:38.965 17:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:38.965 17:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.965 17:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.965 17:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.965 17:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.965 17:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:38.965 "name": "raid_bdev1", 00:28:38.965 "uuid": "6cdffd8a-21a7-4925-ad8d-91fbd665a690", 00:28:38.965 "strip_size_kb": 0, 00:28:38.965 "state": "online", 00:28:38.965 "raid_level": "raid1", 00:28:38.965 "superblock": false, 00:28:38.965 "num_base_bdevs": 2, 00:28:38.965 "num_base_bdevs_discovered": 2, 00:28:38.965 "num_base_bdevs_operational": 2, 00:28:38.965 "process": { 00:28:38.965 "type": "rebuild", 00:28:38.965 "target": "spare", 00:28:38.965 "progress": { 00:28:38.965 "blocks": 20480, 00:28:38.965 "percent": 31 00:28:38.965 } 00:28:38.965 }, 00:28:38.965 "base_bdevs_list": [ 00:28:38.965 { 
00:28:38.965 "name": "spare", 00:28:38.965 "uuid": "050e1859-e3f3-5aa6-b814-ad7e3bd0bb8c", 00:28:38.966 "is_configured": true, 00:28:38.966 "data_offset": 0, 00:28:38.966 "data_size": 65536 00:28:38.966 }, 00:28:38.966 { 00:28:38.966 "name": "BaseBdev2", 00:28:38.966 "uuid": "0c5e840e-bcce-5a3b-a71f-0296492608c3", 00:28:38.966 "is_configured": true, 00:28:38.966 "data_offset": 0, 00:28:38.966 "data_size": 65536 00:28:38.966 } 00:28:38.966 ] 00:28:38.966 }' 00:28:38.966 17:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:38.966 17:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:38.966 17:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:38.966 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:38.966 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:38.966 17:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.966 17:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.966 [2024-11-26 17:25:09.014696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:38.966 [2024-11-26 17:25:09.054192] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:38.966 [2024-11-26 17:25:09.054289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:38.966 [2024-11-26 17:25:09.054308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:38.966 [2024-11-26 17:25:09.054324] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.223 17:25:09 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:39.223 "name": "raid_bdev1", 00:28:39.223 "uuid": "6cdffd8a-21a7-4925-ad8d-91fbd665a690", 00:28:39.223 "strip_size_kb": 0, 00:28:39.223 "state": "online", 00:28:39.223 "raid_level": "raid1", 00:28:39.223 "superblock": false, 00:28:39.223 "num_base_bdevs": 2, 00:28:39.223 "num_base_bdevs_discovered": 1, 
00:28:39.223 "num_base_bdevs_operational": 1, 00:28:39.223 "base_bdevs_list": [ 00:28:39.223 { 00:28:39.223 "name": null, 00:28:39.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.223 "is_configured": false, 00:28:39.223 "data_offset": 0, 00:28:39.223 "data_size": 65536 00:28:39.223 }, 00:28:39.223 { 00:28:39.223 "name": "BaseBdev2", 00:28:39.223 "uuid": "0c5e840e-bcce-5a3b-a71f-0296492608c3", 00:28:39.223 "is_configured": true, 00:28:39.223 "data_offset": 0, 00:28:39.223 "data_size": 65536 00:28:39.223 } 00:28:39.223 ] 00:28:39.223 }' 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:39.223 17:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.481 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:39.481 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:39.481 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:39.481 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:39.481 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:39.481 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:39.481 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.481 17:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.481 17:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.481 17:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.481 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:39.481 "name": "raid_bdev1", 00:28:39.481 "uuid": 
"6cdffd8a-21a7-4925-ad8d-91fbd665a690", 00:28:39.481 "strip_size_kb": 0, 00:28:39.481 "state": "online", 00:28:39.481 "raid_level": "raid1", 00:28:39.481 "superblock": false, 00:28:39.481 "num_base_bdevs": 2, 00:28:39.481 "num_base_bdevs_discovered": 1, 00:28:39.481 "num_base_bdevs_operational": 1, 00:28:39.481 "base_bdevs_list": [ 00:28:39.481 { 00:28:39.481 "name": null, 00:28:39.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.481 "is_configured": false, 00:28:39.481 "data_offset": 0, 00:28:39.481 "data_size": 65536 00:28:39.481 }, 00:28:39.481 { 00:28:39.481 "name": "BaseBdev2", 00:28:39.481 "uuid": "0c5e840e-bcce-5a3b-a71f-0296492608c3", 00:28:39.481 "is_configured": true, 00:28:39.481 "data_offset": 0, 00:28:39.481 "data_size": 65536 00:28:39.481 } 00:28:39.481 ] 00:28:39.481 }' 00:28:39.481 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:39.481 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:39.739 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:39.739 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:39.740 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:39.740 17:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.740 17:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.740 [2024-11-26 17:25:09.630933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:39.740 [2024-11-26 17:25:09.649955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:28:39.740 17:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.740 17:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:28:39.740 [2024-11-26 17:25:09.652404] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:40.709 "name": "raid_bdev1", 00:28:40.709 "uuid": "6cdffd8a-21a7-4925-ad8d-91fbd665a690", 00:28:40.709 "strip_size_kb": 0, 00:28:40.709 "state": "online", 00:28:40.709 "raid_level": "raid1", 00:28:40.709 "superblock": false, 00:28:40.709 "num_base_bdevs": 2, 00:28:40.709 "num_base_bdevs_discovered": 2, 00:28:40.709 "num_base_bdevs_operational": 2, 00:28:40.709 "process": { 00:28:40.709 "type": "rebuild", 00:28:40.709 "target": "spare", 00:28:40.709 "progress": { 00:28:40.709 "blocks": 20480, 00:28:40.709 "percent": 31 00:28:40.709 } 00:28:40.709 }, 00:28:40.709 "base_bdevs_list": [ 00:28:40.709 { 00:28:40.709 "name": "spare", 00:28:40.709 "uuid": 
"050e1859-e3f3-5aa6-b814-ad7e3bd0bb8c", 00:28:40.709 "is_configured": true, 00:28:40.709 "data_offset": 0, 00:28:40.709 "data_size": 65536 00:28:40.709 }, 00:28:40.709 { 00:28:40.709 "name": "BaseBdev2", 00:28:40.709 "uuid": "0c5e840e-bcce-5a3b-a71f-0296492608c3", 00:28:40.709 "is_configured": true, 00:28:40.709 "data_offset": 0, 00:28:40.709 "data_size": 65536 00:28:40.709 } 00:28:40.709 ] 00:28:40.709 }' 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=376 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.709 17:25:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.710 17:25:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.710 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:40.710 "name": "raid_bdev1", 00:28:40.710 "uuid": "6cdffd8a-21a7-4925-ad8d-91fbd665a690", 00:28:40.710 "strip_size_kb": 0, 00:28:40.710 "state": "online", 00:28:40.710 "raid_level": "raid1", 00:28:40.710 "superblock": false, 00:28:40.710 "num_base_bdevs": 2, 00:28:40.710 "num_base_bdevs_discovered": 2, 00:28:40.710 "num_base_bdevs_operational": 2, 00:28:40.710 "process": { 00:28:40.710 "type": "rebuild", 00:28:40.710 "target": "spare", 00:28:40.710 "progress": { 00:28:40.710 "blocks": 22528, 00:28:40.710 "percent": 34 00:28:40.710 } 00:28:40.710 }, 00:28:40.710 "base_bdevs_list": [ 00:28:40.710 { 00:28:40.710 "name": "spare", 00:28:40.710 "uuid": "050e1859-e3f3-5aa6-b814-ad7e3bd0bb8c", 00:28:40.710 "is_configured": true, 00:28:40.710 "data_offset": 0, 00:28:40.710 "data_size": 65536 00:28:40.710 }, 00:28:40.710 { 00:28:40.710 "name": "BaseBdev2", 00:28:40.710 "uuid": "0c5e840e-bcce-5a3b-a71f-0296492608c3", 00:28:40.710 "is_configured": true, 00:28:40.710 "data_offset": 0, 00:28:40.710 "data_size": 65536 00:28:40.710 } 00:28:40.710 ] 00:28:40.710 }' 00:28:40.710 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:40.968 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:40.968 17:25:10 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:40.968 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:40.968 17:25:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:41.906 17:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:41.906 17:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:41.906 17:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:41.906 17:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:41.906 17:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:41.906 17:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:41.906 17:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:41.906 17:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.906 17:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.906 17:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.906 17:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.906 17:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:41.906 "name": "raid_bdev1", 00:28:41.906 "uuid": "6cdffd8a-21a7-4925-ad8d-91fbd665a690", 00:28:41.906 "strip_size_kb": 0, 00:28:41.906 "state": "online", 00:28:41.906 "raid_level": "raid1", 00:28:41.906 "superblock": false, 00:28:41.906 "num_base_bdevs": 2, 00:28:41.906 "num_base_bdevs_discovered": 2, 00:28:41.906 "num_base_bdevs_operational": 2, 00:28:41.906 "process": { 00:28:41.906 "type": "rebuild", 00:28:41.906 "target": "spare", 
00:28:41.906 "progress": { 00:28:41.906 "blocks": 45056, 00:28:41.906 "percent": 68 00:28:41.906 } 00:28:41.906 }, 00:28:41.906 "base_bdevs_list": [ 00:28:41.906 { 00:28:41.906 "name": "spare", 00:28:41.906 "uuid": "050e1859-e3f3-5aa6-b814-ad7e3bd0bb8c", 00:28:41.906 "is_configured": true, 00:28:41.906 "data_offset": 0, 00:28:41.906 "data_size": 65536 00:28:41.906 }, 00:28:41.906 { 00:28:41.906 "name": "BaseBdev2", 00:28:41.906 "uuid": "0c5e840e-bcce-5a3b-a71f-0296492608c3", 00:28:41.906 "is_configured": true, 00:28:41.906 "data_offset": 0, 00:28:41.906 "data_size": 65536 00:28:41.906 } 00:28:41.906 ] 00:28:41.906 }' 00:28:41.906 17:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:41.906 17:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:41.906 17:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:42.165 17:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:42.165 17:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:43.102 [2024-11-26 17:25:12.872508] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:43.102 [2024-11-26 17:25:12.872616] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:43.102 [2024-11-26 17:25:12.872675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:43.102 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:43.102 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:43.102 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:43.102 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:28:43.102 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:43.102 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:43.102 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:43.102 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.102 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:43.102 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.102 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.102 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:43.102 "name": "raid_bdev1", 00:28:43.102 "uuid": "6cdffd8a-21a7-4925-ad8d-91fbd665a690", 00:28:43.102 "strip_size_kb": 0, 00:28:43.102 "state": "online", 00:28:43.102 "raid_level": "raid1", 00:28:43.102 "superblock": false, 00:28:43.102 "num_base_bdevs": 2, 00:28:43.102 "num_base_bdevs_discovered": 2, 00:28:43.102 "num_base_bdevs_operational": 2, 00:28:43.102 "base_bdevs_list": [ 00:28:43.102 { 00:28:43.102 "name": "spare", 00:28:43.102 "uuid": "050e1859-e3f3-5aa6-b814-ad7e3bd0bb8c", 00:28:43.102 "is_configured": true, 00:28:43.102 "data_offset": 0, 00:28:43.102 "data_size": 65536 00:28:43.102 }, 00:28:43.102 { 00:28:43.102 "name": "BaseBdev2", 00:28:43.102 "uuid": "0c5e840e-bcce-5a3b-a71f-0296492608c3", 00:28:43.102 "is_configured": true, 00:28:43.102 "data_offset": 0, 00:28:43.102 "data_size": 65536 00:28:43.102 } 00:28:43.102 ] 00:28:43.103 }' 00:28:43.103 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:43.103 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:43.103 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:43.365 "name": "raid_bdev1", 00:28:43.365 "uuid": "6cdffd8a-21a7-4925-ad8d-91fbd665a690", 00:28:43.365 "strip_size_kb": 0, 00:28:43.365 "state": "online", 00:28:43.365 "raid_level": "raid1", 00:28:43.365 "superblock": false, 00:28:43.365 "num_base_bdevs": 2, 00:28:43.365 "num_base_bdevs_discovered": 2, 00:28:43.365 "num_base_bdevs_operational": 2, 00:28:43.365 "base_bdevs_list": [ 00:28:43.365 { 00:28:43.365 "name": "spare", 00:28:43.365 "uuid": "050e1859-e3f3-5aa6-b814-ad7e3bd0bb8c", 00:28:43.365 "is_configured": true, 00:28:43.365 "data_offset": 0, 00:28:43.365 "data_size": 65536 
00:28:43.365 }, 00:28:43.365 { 00:28:43.365 "name": "BaseBdev2", 00:28:43.365 "uuid": "0c5e840e-bcce-5a3b-a71f-0296492608c3", 00:28:43.365 "is_configured": true, 00:28:43.365 "data_offset": 0, 00:28:43.365 "data_size": 65536 00:28:43.365 } 00:28:43.365 ] 00:28:43.365 }' 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.365 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:43.365 "name": "raid_bdev1", 00:28:43.365 "uuid": "6cdffd8a-21a7-4925-ad8d-91fbd665a690", 00:28:43.365 "strip_size_kb": 0, 00:28:43.365 "state": "online", 00:28:43.365 "raid_level": "raid1", 00:28:43.365 "superblock": false, 00:28:43.365 "num_base_bdevs": 2, 00:28:43.366 "num_base_bdevs_discovered": 2, 00:28:43.366 "num_base_bdevs_operational": 2, 00:28:43.366 "base_bdevs_list": [ 00:28:43.366 { 00:28:43.366 "name": "spare", 00:28:43.366 "uuid": "050e1859-e3f3-5aa6-b814-ad7e3bd0bb8c", 00:28:43.366 "is_configured": true, 00:28:43.366 "data_offset": 0, 00:28:43.366 "data_size": 65536 00:28:43.366 }, 00:28:43.366 { 00:28:43.366 "name": "BaseBdev2", 00:28:43.366 "uuid": "0c5e840e-bcce-5a3b-a71f-0296492608c3", 00:28:43.366 "is_configured": true, 00:28:43.366 "data_offset": 0, 00:28:43.366 "data_size": 65536 00:28:43.366 } 00:28:43.366 ] 00:28:43.366 }' 00:28:43.366 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:43.366 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.933 [2024-11-26 17:25:13.859079] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:43.933 [2024-11-26 17:25:13.859118] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:28:43.933 [2024-11-26 17:25:13.859229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:43.933 [2024-11-26 17:25:13.859312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:43.933 [2024-11-26 17:25:13.859325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:43.933 17:25:13 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:43.933 17:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:44.192 /dev/nbd0 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:44.192 1+0 records in 00:28:44.192 1+0 records out 00:28:44.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303428 s, 13.5 MB/s 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:44.192 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:28:44.451 /dev/nbd1 00:28:44.451 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:44.451 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:44.451 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:44.451 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:28:44.451 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:44.451 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:44.451 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:44.452 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:28:44.452 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:44.452 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:44.452 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:44.452 1+0 records in 00:28:44.452 1+0 records out 00:28:44.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426953 s, 9.6 MB/s 00:28:44.452 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:44.452 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:28:44.452 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:44.452 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:44.452 17:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:28:44.452 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:44.452 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:44.452 17:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:44.711 17:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:28:44.711 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:44.711 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:44.711 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:44.711 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:44.711 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:44.711 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:44.970 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:28:44.970 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:44.970 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:44.970 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:44.970 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:44.970 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:44.970 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:44.970 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:44.970 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:44.970 17:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:45.228 17:25:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:45.228 17:25:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:45.228 17:25:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:45.228 17:25:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75405 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@954 -- # '[' -z 75405 ']' 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75405 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75405 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:45.229 killing process with pid 75405 00:28:45.229 Received shutdown signal, test time was about 60.000000 seconds 00:28:45.229 00:28:45.229 Latency(us) 00:28:45.229 [2024-11-26T17:25:15.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.229 [2024-11-26T17:25:15.343Z] =================================================================================================================== 00:28:45.229 [2024-11-26T17:25:15.343Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75405' 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75405 00:28:45.229 [2024-11-26 17:25:15.290505] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:45.229 17:25:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75405 00:28:45.796 [2024-11-26 17:25:15.615713] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:28:47.169 00:28:47.169 real 0m16.927s 00:28:47.169 user 0m18.470s 00:28:47.169 sys 0m3.847s 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.169 ************************************ 00:28:47.169 END TEST raid_rebuild_test 00:28:47.169 ************************************ 00:28:47.169 17:25:16 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:28:47.169 17:25:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:28:47.169 17:25:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.169 17:25:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:47.169 ************************************ 00:28:47.169 START TEST raid_rebuild_test_sb 00:28:47.169 ************************************ 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:47.169 17:25:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75840 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75840 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75840 ']' 00:28:47.169 17:25:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.170 17:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.170 17:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.170 17:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.170 17:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:47.170 [2024-11-26 17:25:17.042838] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:28:47.170 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:47.170 Zero copy mechanism will not be used. 00:28:47.170 [2024-11-26 17:25:17.043679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75840 ] 00:28:47.170 [2024-11-26 17:25:17.233076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.477 [2024-11-26 17:25:17.387596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.759 [2024-11-26 17:25:17.619169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:47.759 [2024-11-26 17:25:17.619480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:48.019 17:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.019 17:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:28:48.019 17:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 
00:28:48.019 17:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:48.019 17:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.019 17:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.019 BaseBdev1_malloc 00:28:48.019 17:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.019 17:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:48.019 17:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.019 17:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.019 [2024-11-26 17:25:17.982716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:48.019 [2024-11-26 17:25:17.982821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:48.019 [2024-11-26 17:25:17.982847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:48.019 [2024-11-26 17:25:17.982875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:48.019 [2024-11-26 17:25:17.985604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:48.019 [2024-11-26 17:25:17.985651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:48.019 BaseBdev1 00:28:48.019 17:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.019 17:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:48.019 17:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:48.019 17:25:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.019 17:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.019 BaseBdev2_malloc 00:28:48.019 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.019 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:48.019 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.019 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.019 [2024-11-26 17:25:18.043305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:48.019 [2024-11-26 17:25:18.043390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:48.019 [2024-11-26 17:25:18.043420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:48.019 [2024-11-26 17:25:18.043435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:48.019 [2024-11-26 17:25:18.046209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:48.019 [2024-11-26 17:25:18.046257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:48.019 BaseBdev2 00:28:48.019 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.019 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:28:48.019 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.019 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.019 spare_malloc 00:28:48.019 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.019 17:25:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:48.019 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.019 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.019 spare_delay 00:28:48.019 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.019 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:48.019 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.019 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.019 [2024-11-26 17:25:18.128008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:48.019 [2024-11-26 17:25:18.128093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:48.019 [2024-11-26 17:25:18.128121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:48.019 [2024-11-26 17:25:18.128137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:48.019 [2024-11-26 17:25:18.131020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:48.019 [2024-11-26 17:25:18.131071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:48.280 spare 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:28:48.280 [2024-11-26 17:25:18.140193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:48.280 [2024-11-26 17:25:18.142691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:48.280 [2024-11-26 17:25:18.142913] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:48.280 [2024-11-26 17:25:18.142933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:48.280 [2024-11-26 17:25:18.143254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:48.280 [2024-11-26 17:25:18.143475] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:48.280 [2024-11-26 17:25:18.143496] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:48.280 [2024-11-26 17:25:18.143720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:48.280 17:25:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:48.280 "name": "raid_bdev1", 00:28:48.280 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:28:48.280 "strip_size_kb": 0, 00:28:48.280 "state": "online", 00:28:48.280 "raid_level": "raid1", 00:28:48.280 "superblock": true, 00:28:48.280 "num_base_bdevs": 2, 00:28:48.280 "num_base_bdevs_discovered": 2, 00:28:48.280 "num_base_bdevs_operational": 2, 00:28:48.280 "base_bdevs_list": [ 00:28:48.280 { 00:28:48.280 "name": "BaseBdev1", 00:28:48.280 "uuid": "4624b991-cd6c-500b-8731-ceaa6114908a", 00:28:48.280 "is_configured": true, 00:28:48.280 "data_offset": 2048, 00:28:48.280 "data_size": 63488 00:28:48.280 }, 00:28:48.280 { 00:28:48.280 "name": "BaseBdev2", 00:28:48.280 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:28:48.280 "is_configured": true, 00:28:48.280 "data_offset": 2048, 00:28:48.280 "data_size": 63488 00:28:48.280 } 00:28:48.280 ] 00:28:48.280 }' 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:48.280 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:28:48.539 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:48.539 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.539 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.539 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:48.539 [2024-11-26 17:25:18.607851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:48.539 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.539 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:28:48.539 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:48.539 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.539 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.539 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:48.798 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.798 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:28:48.798 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:28:48.798 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:28:48.798 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:28:48.798 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:28:48.798 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:48.798 
17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:48.798 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:48.798 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:48.798 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:48.798 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:48.798 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:48.798 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:48.798 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:49.056 [2024-11-26 17:25:18.919130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:49.056 /dev/nbd0 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:49.056 17:25:18 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:49.056 1+0 records in 00:28:49.056 1+0 records out 00:28:49.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396594 s, 10.3 MB/s 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:28:49.056 17:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:28:55.690 63488+0 records in 00:28:55.690 63488+0 records out 00:28:55.690 32505856 bytes (33 MB, 31 MiB) copied, 5.51003 s, 5.9 MB/s 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:55.690 [2024-11-26 17:25:24.741989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.690 [2024-11-26 17:25:24.762059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.690 17:25:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:55.690 "name": "raid_bdev1", 00:28:55.690 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:28:55.690 "strip_size_kb": 0, 00:28:55.690 "state": "online", 00:28:55.690 "raid_level": "raid1", 00:28:55.690 "superblock": true, 00:28:55.690 "num_base_bdevs": 2, 
00:28:55.690 "num_base_bdevs_discovered": 1, 00:28:55.690 "num_base_bdevs_operational": 1, 00:28:55.690 "base_bdevs_list": [ 00:28:55.690 { 00:28:55.690 "name": null, 00:28:55.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:55.690 "is_configured": false, 00:28:55.690 "data_offset": 0, 00:28:55.690 "data_size": 63488 00:28:55.690 }, 00:28:55.690 { 00:28:55.690 "name": "BaseBdev2", 00:28:55.690 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:28:55.690 "is_configured": true, 00:28:55.690 "data_offset": 2048, 00:28:55.690 "data_size": 63488 00:28:55.690 } 00:28:55.690 ] 00:28:55.690 }' 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:55.690 17:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.690 17:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:55.690 17:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.690 17:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.690 [2024-11-26 17:25:25.217790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:55.690 [2024-11-26 17:25:25.237354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:28:55.690 17:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.690 17:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:55.690 [2024-11-26 17:25:25.239660] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:56.284 17:25:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:56.284 "name": "raid_bdev1", 00:28:56.284 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:28:56.284 "strip_size_kb": 0, 00:28:56.284 "state": "online", 00:28:56.284 "raid_level": "raid1", 00:28:56.284 "superblock": true, 00:28:56.284 "num_base_bdevs": 2, 00:28:56.284 "num_base_bdevs_discovered": 2, 00:28:56.284 "num_base_bdevs_operational": 2, 00:28:56.284 "process": { 00:28:56.284 "type": "rebuild", 00:28:56.284 "target": "spare", 00:28:56.284 "progress": { 00:28:56.284 "blocks": 20480, 00:28:56.284 "percent": 32 00:28:56.284 } 00:28:56.284 }, 00:28:56.284 "base_bdevs_list": [ 00:28:56.284 { 00:28:56.284 "name": "spare", 00:28:56.284 "uuid": "6d8d226a-0134-5079-aa78-8420159427a7", 00:28:56.284 "is_configured": true, 00:28:56.284 "data_offset": 2048, 00:28:56.284 "data_size": 63488 00:28:56.284 }, 00:28:56.284 { 00:28:56.284 "name": "BaseBdev2", 00:28:56.284 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:28:56.284 "is_configured": true, 00:28:56.284 "data_offset": 2048, 00:28:56.284 "data_size": 63488 00:28:56.284 } 
00:28:56.284 ] 00:28:56.284 }' 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.284 17:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.284 [2024-11-26 17:25:26.395023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:56.544 [2024-11-26 17:25:26.446950] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:56.544 [2024-11-26 17:25:26.447037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:56.544 [2024-11-26 17:25:26.447055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:56.544 [2024-11-26 17:25:26.447069] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:56.544 "name": "raid_bdev1", 00:28:56.544 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:28:56.544 "strip_size_kb": 0, 00:28:56.544 "state": "online", 00:28:56.544 "raid_level": "raid1", 00:28:56.544 "superblock": true, 00:28:56.544 "num_base_bdevs": 2, 00:28:56.544 "num_base_bdevs_discovered": 1, 00:28:56.544 "num_base_bdevs_operational": 1, 00:28:56.544 "base_bdevs_list": [ 00:28:56.544 { 00:28:56.544 "name": null, 00:28:56.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:56.544 "is_configured": false, 00:28:56.544 "data_offset": 0, 00:28:56.544 "data_size": 63488 00:28:56.544 }, 00:28:56.544 { 00:28:56.544 "name": "BaseBdev2", 00:28:56.544 "uuid": 
"a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:28:56.544 "is_configured": true, 00:28:56.544 "data_offset": 2048, 00:28:56.544 "data_size": 63488 00:28:56.544 } 00:28:56.544 ] 00:28:56.544 }' 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:56.544 17:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.112 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:57.112 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:57.112 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:57.112 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:57.112 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:57.112 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:57.112 17:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.112 17:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.112 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.112 17:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.112 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:57.112 "name": "raid_bdev1", 00:28:57.112 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:28:57.112 "strip_size_kb": 0, 00:28:57.112 "state": "online", 00:28:57.112 "raid_level": "raid1", 00:28:57.112 "superblock": true, 00:28:57.112 "num_base_bdevs": 2, 00:28:57.112 "num_base_bdevs_discovered": 1, 00:28:57.112 "num_base_bdevs_operational": 1, 00:28:57.112 "base_bdevs_list": [ 00:28:57.112 { 
00:28:57.112 "name": null, 00:28:57.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:57.112 "is_configured": false, 00:28:57.112 "data_offset": 0, 00:28:57.112 "data_size": 63488 00:28:57.112 }, 00:28:57.112 { 00:28:57.112 "name": "BaseBdev2", 00:28:57.112 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:28:57.112 "is_configured": true, 00:28:57.112 "data_offset": 2048, 00:28:57.112 "data_size": 63488 00:28:57.112 } 00:28:57.112 ] 00:28:57.112 }' 00:28:57.112 17:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:57.112 17:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:57.112 17:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:57.112 17:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:57.112 17:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:57.112 17:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.112 17:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.112 [2024-11-26 17:25:27.071601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:57.112 [2024-11-26 17:25:27.090243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:28:57.112 17:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.112 17:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:57.112 [2024-11-26 17:25:27.092697] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:58.048 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:58.048 17:25:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:58.048 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:58.048 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:58.048 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:58.048 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:58.048 17:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.048 17:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.048 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.048 17:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.048 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:58.048 "name": "raid_bdev1", 00:28:58.048 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:28:58.048 "strip_size_kb": 0, 00:28:58.048 "state": "online", 00:28:58.048 "raid_level": "raid1", 00:28:58.048 "superblock": true, 00:28:58.048 "num_base_bdevs": 2, 00:28:58.048 "num_base_bdevs_discovered": 2, 00:28:58.048 "num_base_bdevs_operational": 2, 00:28:58.048 "process": { 00:28:58.048 "type": "rebuild", 00:28:58.048 "target": "spare", 00:28:58.048 "progress": { 00:28:58.048 "blocks": 20480, 00:28:58.048 "percent": 32 00:28:58.048 } 00:28:58.048 }, 00:28:58.048 "base_bdevs_list": [ 00:28:58.048 { 00:28:58.048 "name": "spare", 00:28:58.048 "uuid": "6d8d226a-0134-5079-aa78-8420159427a7", 00:28:58.048 "is_configured": true, 00:28:58.048 "data_offset": 2048, 00:28:58.048 "data_size": 63488 00:28:58.048 }, 00:28:58.048 { 00:28:58.048 "name": "BaseBdev2", 00:28:58.048 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:28:58.048 
"is_configured": true, 00:28:58.048 "data_offset": 2048, 00:28:58.048 "data_size": 63488 00:28:58.048 } 00:28:58.048 ] 00:28:58.048 }' 00:28:58.048 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:58.306 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:58.306 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:58.306 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:58.306 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:28:58.306 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:28:58.306 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:28:58.306 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:28:58.306 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:28:58.306 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:28:58.306 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=394 00:28:58.306 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:58.306 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:58.307 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:58.307 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:58.307 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:58.307 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:28:58.307 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.307 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:58.307 17:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.307 17:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.307 17:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.307 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:58.307 "name": "raid_bdev1", 00:28:58.307 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:28:58.307 "strip_size_kb": 0, 00:28:58.307 "state": "online", 00:28:58.307 "raid_level": "raid1", 00:28:58.307 "superblock": true, 00:28:58.307 "num_base_bdevs": 2, 00:28:58.307 "num_base_bdevs_discovered": 2, 00:28:58.307 "num_base_bdevs_operational": 2, 00:28:58.307 "process": { 00:28:58.307 "type": "rebuild", 00:28:58.307 "target": "spare", 00:28:58.307 "progress": { 00:28:58.307 "blocks": 22528, 00:28:58.307 "percent": 35 00:28:58.307 } 00:28:58.307 }, 00:28:58.307 "base_bdevs_list": [ 00:28:58.307 { 00:28:58.307 "name": "spare", 00:28:58.307 "uuid": "6d8d226a-0134-5079-aa78-8420159427a7", 00:28:58.307 "is_configured": true, 00:28:58.307 "data_offset": 2048, 00:28:58.307 "data_size": 63488 00:28:58.307 }, 00:28:58.307 { 00:28:58.307 "name": "BaseBdev2", 00:28:58.307 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:28:58.307 "is_configured": true, 00:28:58.307 "data_offset": 2048, 00:28:58.307 "data_size": 63488 00:28:58.307 } 00:28:58.307 ] 00:28:58.307 }' 00:28:58.307 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:58.307 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:58.307 17:25:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:58.307 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:58.307 17:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:59.700 "name": "raid_bdev1", 00:28:59.700 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:28:59.700 "strip_size_kb": 0, 00:28:59.700 "state": "online", 00:28:59.700 "raid_level": "raid1", 00:28:59.700 "superblock": true, 00:28:59.700 "num_base_bdevs": 2, 00:28:59.700 "num_base_bdevs_discovered": 2, 00:28:59.700 "num_base_bdevs_operational": 2, 00:28:59.700 "process": { 
00:28:59.700 "type": "rebuild", 00:28:59.700 "target": "spare", 00:28:59.700 "progress": { 00:28:59.700 "blocks": 45056, 00:28:59.700 "percent": 70 00:28:59.700 } 00:28:59.700 }, 00:28:59.700 "base_bdevs_list": [ 00:28:59.700 { 00:28:59.700 "name": "spare", 00:28:59.700 "uuid": "6d8d226a-0134-5079-aa78-8420159427a7", 00:28:59.700 "is_configured": true, 00:28:59.700 "data_offset": 2048, 00:28:59.700 "data_size": 63488 00:28:59.700 }, 00:28:59.700 { 00:28:59.700 "name": "BaseBdev2", 00:28:59.700 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:28:59.700 "is_configured": true, 00:28:59.700 "data_offset": 2048, 00:28:59.700 "data_size": 63488 00:28:59.700 } 00:28:59.700 ] 00:28:59.700 }' 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:59.700 17:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:00.303 [2024-11-26 17:25:30.211256] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:00.303 [2024-11-26 17:25:30.211355] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:00.303 [2024-11-26 17:25:30.211506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:00.561 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:00.561 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:00.561 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:00.561 
17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:00.561 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:00.561 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:00.561 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:00.561 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.561 17:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.561 17:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:00.561 17:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.561 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:00.561 "name": "raid_bdev1", 00:29:00.561 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:29:00.561 "strip_size_kb": 0, 00:29:00.561 "state": "online", 00:29:00.561 "raid_level": "raid1", 00:29:00.561 "superblock": true, 00:29:00.561 "num_base_bdevs": 2, 00:29:00.561 "num_base_bdevs_discovered": 2, 00:29:00.561 "num_base_bdevs_operational": 2, 00:29:00.561 "base_bdevs_list": [ 00:29:00.561 { 00:29:00.561 "name": "spare", 00:29:00.561 "uuid": "6d8d226a-0134-5079-aa78-8420159427a7", 00:29:00.561 "is_configured": true, 00:29:00.562 "data_offset": 2048, 00:29:00.562 "data_size": 63488 00:29:00.562 }, 00:29:00.562 { 00:29:00.562 "name": "BaseBdev2", 00:29:00.562 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:29:00.562 "is_configured": true, 00:29:00.562 "data_offset": 2048, 00:29:00.562 "data_size": 63488 00:29:00.562 } 00:29:00.562 ] 00:29:00.562 }' 00:29:00.562 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:00.562 17:25:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:00.562 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:00.562 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:29:00.562 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:29:00.562 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:00.562 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:00.562 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:00.562 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:00.562 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:00.562 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:00.562 17:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.562 17:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:00.562 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.562 17:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:00.821 "name": "raid_bdev1", 00:29:00.821 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:29:00.821 "strip_size_kb": 0, 00:29:00.821 "state": "online", 00:29:00.821 "raid_level": "raid1", 00:29:00.821 "superblock": true, 00:29:00.821 "num_base_bdevs": 2, 00:29:00.821 "num_base_bdevs_discovered": 2, 00:29:00.821 "num_base_bdevs_operational": 2, 00:29:00.821 "base_bdevs_list": [ 00:29:00.821 { 00:29:00.821 
"name": "spare", 00:29:00.821 "uuid": "6d8d226a-0134-5079-aa78-8420159427a7", 00:29:00.821 "is_configured": true, 00:29:00.821 "data_offset": 2048, 00:29:00.821 "data_size": 63488 00:29:00.821 }, 00:29:00.821 { 00:29:00.821 "name": "BaseBdev2", 00:29:00.821 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:29:00.821 "is_configured": true, 00:29:00.821 "data_offset": 2048, 00:29:00.821 "data_size": 63488 00:29:00.821 } 00:29:00.821 ] 00:29:00.821 }' 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:00.821 "name": "raid_bdev1", 00:29:00.821 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:29:00.821 "strip_size_kb": 0, 00:29:00.821 "state": "online", 00:29:00.821 "raid_level": "raid1", 00:29:00.821 "superblock": true, 00:29:00.821 "num_base_bdevs": 2, 00:29:00.821 "num_base_bdevs_discovered": 2, 00:29:00.821 "num_base_bdevs_operational": 2, 00:29:00.821 "base_bdevs_list": [ 00:29:00.821 { 00:29:00.821 "name": "spare", 00:29:00.821 "uuid": "6d8d226a-0134-5079-aa78-8420159427a7", 00:29:00.821 "is_configured": true, 00:29:00.821 "data_offset": 2048, 00:29:00.821 "data_size": 63488 00:29:00.821 }, 00:29:00.821 { 00:29:00.821 "name": "BaseBdev2", 00:29:00.821 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:29:00.821 "is_configured": true, 00:29:00.821 "data_offset": 2048, 00:29:00.821 "data_size": 63488 00:29:00.821 } 00:29:00.821 ] 00:29:00.821 }' 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:00.821 17:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:29:01.388 [2024-11-26 17:25:31.233893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:01.388 [2024-11-26 17:25:31.233932] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:01.388 [2024-11-26 17:25:31.234038] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:01.388 [2024-11-26 17:25:31.234116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:01.388 [2024-11-26 17:25:31.234132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:01.388 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:01.647 /dev/nbd0 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:01.647 1+0 records in 00:29:01.647 1+0 records out 00:29:01.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393891 s, 10.4 MB/s 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:01.647 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:29:01.906 /dev/nbd1 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:29:01.906 17:25:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:01.906 1+0 records in 00:29:01.906 1+0 records out 00:29:01.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481338 s, 8.5 MB/s 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:01.906 17:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:02.165 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:29:02.165 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:02.165 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:02.165 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:02.165 
17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:02.165 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:02.165 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:02.423 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:02.423 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:02.423 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:02.423 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:02.423 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:02.423 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:02.423 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:02.423 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:02.423 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:02.423 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.683 [2024-11-26 17:25:32.625360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:02.683 [2024-11-26 17:25:32.625447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:02.683 [2024-11-26 17:25:32.625486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:02.683 [2024-11-26 17:25:32.625500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:02.683 [2024-11-26 17:25:32.628410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:02.683 [2024-11-26 17:25:32.628618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:02.683 [2024-11-26 17:25:32.628789] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:02.683 [2024-11-26 
17:25:32.628868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:02.683 [2024-11-26 17:25:32.629063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:02.683 spare 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.683 [2024-11-26 17:25:32.729030] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:29:02.683 [2024-11-26 17:25:32.729106] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:02.683 [2024-11-26 17:25:32.729825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:29:02.683 [2024-11-26 17:25:32.730156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:29:02.683 [2024-11-26 17:25:32.730210] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:29:02.683 [2024-11-26 17:25:32.730661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.683 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:02.684 "name": "raid_bdev1", 00:29:02.684 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:29:02.684 "strip_size_kb": 0, 00:29:02.684 "state": "online", 00:29:02.684 "raid_level": "raid1", 00:29:02.684 "superblock": true, 00:29:02.684 "num_base_bdevs": 2, 00:29:02.684 "num_base_bdevs_discovered": 2, 00:29:02.684 "num_base_bdevs_operational": 2, 00:29:02.684 "base_bdevs_list": [ 00:29:02.684 { 00:29:02.684 "name": "spare", 00:29:02.684 "uuid": "6d8d226a-0134-5079-aa78-8420159427a7", 00:29:02.684 "is_configured": true, 00:29:02.684 "data_offset": 2048, 00:29:02.684 "data_size": 63488 00:29:02.684 }, 00:29:02.684 { 00:29:02.684 "name": "BaseBdev2", 00:29:02.684 "uuid": 
"a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:29:02.684 "is_configured": true, 00:29:02.684 "data_offset": 2048, 00:29:02.684 "data_size": 63488 00:29:02.684 } 00:29:02.684 ] 00:29:02.684 }' 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:02.684 17:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:03.253 "name": "raid_bdev1", 00:29:03.253 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:29:03.253 "strip_size_kb": 0, 00:29:03.253 "state": "online", 00:29:03.253 "raid_level": "raid1", 00:29:03.253 "superblock": true, 00:29:03.253 "num_base_bdevs": 2, 00:29:03.253 "num_base_bdevs_discovered": 2, 00:29:03.253 "num_base_bdevs_operational": 2, 00:29:03.253 "base_bdevs_list": [ 00:29:03.253 { 
00:29:03.253 "name": "spare", 00:29:03.253 "uuid": "6d8d226a-0134-5079-aa78-8420159427a7", 00:29:03.253 "is_configured": true, 00:29:03.253 "data_offset": 2048, 00:29:03.253 "data_size": 63488 00:29:03.253 }, 00:29:03.253 { 00:29:03.253 "name": "BaseBdev2", 00:29:03.253 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:29:03.253 "is_configured": true, 00:29:03.253 "data_offset": 2048, 00:29:03.253 "data_size": 63488 00:29:03.253 } 00:29:03.253 ] 00:29:03.253 }' 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.253 [2024-11-26 17:25:33.353762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
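A note on the `waitfornbd` / `waitfornbd_exit` helpers traced above: both poll `/proc/partitions` with `grep -q -w` up to 20 times before giving up. A minimal, self-contained sketch of that retry pattern (a temporary file stands in for the `/proc/partitions` entry, which is an assumption made for portability):

```shell
# Retry-until-ready loop in the style of waitfornbd: poll a condition
# up to 20 times, breaking as soon as it holds. A background touch of a
# temp file stands in for the kernel adding an nbd entry (assumption).
marker=$(mktemp -u)
( sleep 0.2; touch "$marker" ) &   # the "device" appears asynchronously
status="timed out"
i=1
while [ "$i" -le 20 ]; do
    if [ -e "$marker" ]; then      # real helper: grep -q -w nbd0 /proc/partitions
        status="ready after $i tries"
        break
    fi
    i=$((i + 1))
    sleep 0.1
done
wait
rm -f "$marker"
echo "$status"
```

Once the loop breaks, the real helper additionally reads one 4 KiB block with `dd ... iflag=direct` to confirm the device actually accepts I/O, as seen in the trace.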
00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:03.253 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:03.513 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.513 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.513 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.513 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.513 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.513 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:03.513 "name": "raid_bdev1", 00:29:03.513 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:29:03.513 "strip_size_kb": 0, 00:29:03.513 
"state": "online", 00:29:03.513 "raid_level": "raid1", 00:29:03.513 "superblock": true, 00:29:03.513 "num_base_bdevs": 2, 00:29:03.513 "num_base_bdevs_discovered": 1, 00:29:03.513 "num_base_bdevs_operational": 1, 00:29:03.513 "base_bdevs_list": [ 00:29:03.513 { 00:29:03.513 "name": null, 00:29:03.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.513 "is_configured": false, 00:29:03.513 "data_offset": 0, 00:29:03.513 "data_size": 63488 00:29:03.513 }, 00:29:03.513 { 00:29:03.513 "name": "BaseBdev2", 00:29:03.513 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:29:03.513 "is_configured": true, 00:29:03.513 "data_offset": 2048, 00:29:03.513 "data_size": 63488 00:29:03.513 } 00:29:03.513 ] 00:29:03.513 }' 00:29:03.513 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:03.513 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.772 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:03.772 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.772 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.772 [2024-11-26 17:25:33.773447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:03.772 [2024-11-26 17:25:33.773876] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:03.772 [2024-11-26 17:25:33.774039] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
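Earlier in this trace the two NBD exports are compared with `cmp -i 1048576 /dev/nbd0 /dev/nbd1`: `-i` skips the first 1 MiB of both devices, which is the superblock/metadata region implied by the JSON above (`data_offset` 2048 blocks x 512-byte blocklen = 1048576 bytes), so only the data region beyond it must match. A small sketch of that offset comparison, using plain files in place of the NBD devices (an assumption for portability):

```shell
# Two "devices" that deliberately differ in their first 1 MiB
# (zeroes vs. 0xff) but share the payload that follows it.
a=$(mktemp); b=$(mktemp)
head -c 1048576 /dev/zero > "$a"
head -c 1048576 /dev/zero | tr '\0' '\377' > "$b"
printf 'payload' >> "$a"
printf 'payload' >> "$b"
# cmp -i N skips N bytes of BOTH inputs before comparing.
if cmp -i 1048576 "$a" "$b"; then
    result="match past superblock"
else
    result="mismatch"
fi
rm -f "$a" "$b"
echo "$result"
```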
00:29:03.772 [2024-11-26 17:25:33.774168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:03.772 [2024-11-26 17:25:33.792589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:29:03.772 17:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.772 17:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:29:03.772 [2024-11-26 17:25:33.795214] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:04.711 17:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:04.711 17:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:04.711 17:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:04.711 17:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:04.711 17:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:04.711 17:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:04.711 17:25:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.711 17:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.711 17:25:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.971 17:25:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.971 17:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:04.971 "name": "raid_bdev1", 00:29:04.971 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:29:04.971 "strip_size_kb": 0, 00:29:04.971 "state": "online", 00:29:04.971 "raid_level": "raid1", 
00:29:04.971 "superblock": true, 00:29:04.971 "num_base_bdevs": 2, 00:29:04.971 "num_base_bdevs_discovered": 2, 00:29:04.971 "num_base_bdevs_operational": 2, 00:29:04.971 "process": { 00:29:04.971 "type": "rebuild", 00:29:04.971 "target": "spare", 00:29:04.971 "progress": { 00:29:04.971 "blocks": 20480, 00:29:04.971 "percent": 32 00:29:04.971 } 00:29:04.971 }, 00:29:04.971 "base_bdevs_list": [ 00:29:04.971 { 00:29:04.971 "name": "spare", 00:29:04.971 "uuid": "6d8d226a-0134-5079-aa78-8420159427a7", 00:29:04.971 "is_configured": true, 00:29:04.971 "data_offset": 2048, 00:29:04.971 "data_size": 63488 00:29:04.971 }, 00:29:04.971 { 00:29:04.971 "name": "BaseBdev2", 00:29:04.971 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:29:04.971 "is_configured": true, 00:29:04.971 "data_offset": 2048, 00:29:04.971 "data_size": 63488 00:29:04.971 } 00:29:04.971 ] 00:29:04.971 }' 00:29:04.971 17:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:04.971 17:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:04.971 17:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:04.971 17:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:04.971 17:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:29:04.971 17:25:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.971 17:25:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.971 [2024-11-26 17:25:34.931238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:04.971 [2024-11-26 17:25:35.002735] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:04.971 [2024-11-26 17:25:35.002852] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:29:04.971 [2024-11-26 17:25:35.002870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:04.971 [2024-11-26 17:25:35.002884] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.971 17:25:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.230 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:05.230 "name": "raid_bdev1", 00:29:05.230 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:29:05.230 "strip_size_kb": 0, 00:29:05.230 "state": "online", 00:29:05.230 "raid_level": "raid1", 00:29:05.230 "superblock": true, 00:29:05.230 "num_base_bdevs": 2, 00:29:05.230 "num_base_bdevs_discovered": 1, 00:29:05.230 "num_base_bdevs_operational": 1, 00:29:05.230 "base_bdevs_list": [ 00:29:05.230 { 00:29:05.230 "name": null, 00:29:05.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.230 "is_configured": false, 00:29:05.230 "data_offset": 0, 00:29:05.230 "data_size": 63488 00:29:05.230 }, 00:29:05.230 { 00:29:05.230 "name": "BaseBdev2", 00:29:05.230 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:29:05.230 "is_configured": true, 00:29:05.230 "data_offset": 2048, 00:29:05.230 "data_size": 63488 00:29:05.230 } 00:29:05.230 ] 00:29:05.230 }' 00:29:05.230 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:05.230 17:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.490 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:05.490 17:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.490 17:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.490 [2024-11-26 17:25:35.437014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:05.490 [2024-11-26 17:25:35.437105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:05.490 [2024-11-26 17:25:35.437133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:29:05.490 [2024-11-26 17:25:35.437162] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:05.490 [2024-11-26 17:25:35.437767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:05.490 [2024-11-26 17:25:35.437797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:05.490 [2024-11-26 17:25:35.437914] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:05.490 [2024-11-26 17:25:35.437934] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:05.490 [2024-11-26 17:25:35.437949] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:05.490 [2024-11-26 17:25:35.437979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:05.490 [2024-11-26 17:25:35.456494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:29:05.490 spare 00:29:05.490 17:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.490 17:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:29:05.490 [2024-11-26 17:25:35.459031] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:06.426 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:06.426 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:06.426 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:06.426 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:06.426 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:06.426 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:29:06.426 17:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.426 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.426 17:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:06.426 17:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.426 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:06.426 "name": "raid_bdev1", 00:29:06.426 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:29:06.426 "strip_size_kb": 0, 00:29:06.426 "state": "online", 00:29:06.426 "raid_level": "raid1", 00:29:06.426 "superblock": true, 00:29:06.426 "num_base_bdevs": 2, 00:29:06.426 "num_base_bdevs_discovered": 2, 00:29:06.426 "num_base_bdevs_operational": 2, 00:29:06.426 "process": { 00:29:06.426 "type": "rebuild", 00:29:06.426 "target": "spare", 00:29:06.426 "progress": { 00:29:06.426 "blocks": 20480, 00:29:06.426 "percent": 32 00:29:06.426 } 00:29:06.426 }, 00:29:06.426 "base_bdevs_list": [ 00:29:06.426 { 00:29:06.426 "name": "spare", 00:29:06.426 "uuid": "6d8d226a-0134-5079-aa78-8420159427a7", 00:29:06.426 "is_configured": true, 00:29:06.426 "data_offset": 2048, 00:29:06.426 "data_size": 63488 00:29:06.426 }, 00:29:06.426 { 00:29:06.426 "name": "BaseBdev2", 00:29:06.426 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:29:06.426 "is_configured": true, 00:29:06.426 "data_offset": 2048, 00:29:06.426 "data_size": 63488 00:29:06.426 } 00:29:06.426 ] 00:29:06.426 }' 00:29:06.426 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:06.686 
17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:06.686 [2024-11-26 17:25:36.622334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:06.686 [2024-11-26 17:25:36.666192] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:06.686 [2024-11-26 17:25:36.666421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:06.686 [2024-11-26 17:25:36.666542] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:06.686 [2024-11-26 17:25:36.666588] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:06.686 "name": "raid_bdev1", 00:29:06.686 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:29:06.686 "strip_size_kb": 0, 00:29:06.686 "state": "online", 00:29:06.686 "raid_level": "raid1", 00:29:06.686 "superblock": true, 00:29:06.686 "num_base_bdevs": 2, 00:29:06.686 "num_base_bdevs_discovered": 1, 00:29:06.686 "num_base_bdevs_operational": 1, 00:29:06.686 "base_bdevs_list": [ 00:29:06.686 { 00:29:06.686 "name": null, 00:29:06.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.686 "is_configured": false, 00:29:06.686 "data_offset": 0, 00:29:06.686 "data_size": 63488 00:29:06.686 }, 00:29:06.686 { 00:29:06.686 "name": "BaseBdev2", 00:29:06.686 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:29:06.686 "is_configured": true, 00:29:06.686 "data_offset": 2048, 00:29:06.686 "data_size": 63488 00:29:06.686 } 00:29:06.686 ] 00:29:06.686 }' 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:06.686 17:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.254 17:25:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:07.254 "name": "raid_bdev1", 00:29:07.254 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:29:07.254 "strip_size_kb": 0, 00:29:07.254 "state": "online", 00:29:07.254 "raid_level": "raid1", 00:29:07.254 "superblock": true, 00:29:07.254 "num_base_bdevs": 2, 00:29:07.254 "num_base_bdevs_discovered": 1, 00:29:07.254 "num_base_bdevs_operational": 1, 00:29:07.254 "base_bdevs_list": [ 00:29:07.254 { 00:29:07.254 "name": null, 00:29:07.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.254 "is_configured": false, 00:29:07.254 "data_offset": 0, 00:29:07.254 "data_size": 63488 00:29:07.254 }, 00:29:07.254 { 00:29:07.254 "name": "BaseBdev2", 00:29:07.254 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:29:07.254 "is_configured": true, 00:29:07.254 "data_offset": 2048, 00:29:07.254 "data_size": 
63488 00:29:07.254 } 00:29:07.254 ] 00:29:07.254 }' 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.254 17:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.254 [2024-11-26 17:25:37.366649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:07.513 [2024-11-26 17:25:37.366857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:07.513 [2024-11-26 17:25:37.366907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:29:07.513 [2024-11-26 17:25:37.366933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:07.513 [2024-11-26 17:25:37.367477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:07.513 [2024-11-26 17:25:37.367498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:29:07.513 [2024-11-26 17:25:37.367616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:07.513 [2024-11-26 17:25:37.367635] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:07.513 [2024-11-26 17:25:37.367652] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:07.513 [2024-11-26 17:25:37.367666] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:29:07.513 BaseBdev1 00:29:07.513 17:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.513 17:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:08.449 "name": "raid_bdev1", 00:29:08.449 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:29:08.449 "strip_size_kb": 0, 00:29:08.449 "state": "online", 00:29:08.449 "raid_level": "raid1", 00:29:08.449 "superblock": true, 00:29:08.449 "num_base_bdevs": 2, 00:29:08.449 "num_base_bdevs_discovered": 1, 00:29:08.449 "num_base_bdevs_operational": 1, 00:29:08.449 "base_bdevs_list": [ 00:29:08.449 { 00:29:08.449 "name": null, 00:29:08.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.449 "is_configured": false, 00:29:08.449 "data_offset": 0, 00:29:08.449 "data_size": 63488 00:29:08.449 }, 00:29:08.449 { 00:29:08.449 "name": "BaseBdev2", 00:29:08.449 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:29:08.449 "is_configured": true, 00:29:08.449 "data_offset": 2048, 00:29:08.449 "data_size": 63488 00:29:08.449 } 00:29:08.449 ] 00:29:08.449 }' 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:08.449 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:08.709 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:08.709 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:08.709 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:29:08.709 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:08.709 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:08.709 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:08.709 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.709 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:08.709 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.968 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.968 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:08.968 "name": "raid_bdev1", 00:29:08.968 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:29:08.968 "strip_size_kb": 0, 00:29:08.968 "state": "online", 00:29:08.968 "raid_level": "raid1", 00:29:08.968 "superblock": true, 00:29:08.968 "num_base_bdevs": 2, 00:29:08.968 "num_base_bdevs_discovered": 1, 00:29:08.968 "num_base_bdevs_operational": 1, 00:29:08.968 "base_bdevs_list": [ 00:29:08.968 { 00:29:08.968 "name": null, 00:29:08.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.968 "is_configured": false, 00:29:08.968 "data_offset": 0, 00:29:08.968 "data_size": 63488 00:29:08.968 }, 00:29:08.968 { 00:29:08.968 "name": "BaseBdev2", 00:29:08.968 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:29:08.968 "is_configured": true, 00:29:08.968 "data_offset": 2048, 00:29:08.968 "data_size": 63488 00:29:08.968 } 00:29:08.968 ] 00:29:08.968 }' 00:29:08.968 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:08.968 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:08.968 17:25:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:08.968 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:08.968 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:08.968 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:29:08.968 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:08.968 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:08.968 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.968 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:08.968 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.968 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:08.968 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.968 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:08.969 [2024-11-26 17:25:38.961796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:08.969 [2024-11-26 17:25:38.962001] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:08.969 [2024-11-26 17:25:38.962022] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:08.969 request: 00:29:08.969 { 00:29:08.969 "base_bdev": "BaseBdev1", 00:29:08.969 "raid_bdev": "raid_bdev1", 00:29:08.969 "method": 
"bdev_raid_add_base_bdev", 00:29:08.969 "req_id": 1 00:29:08.969 } 00:29:08.969 Got JSON-RPC error response 00:29:08.969 response: 00:29:08.969 { 00:29:08.969 "code": -22, 00:29:08.969 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:08.969 } 00:29:08.969 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:08.969 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:29:08.969 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:08.969 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:08.969 17:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:08.969 17:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:29:09.905 17:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:09.905 17:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:09.905 17:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:09.905 17:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:09.905 17:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:09.905 17:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:09.905 17:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:09.905 17:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:09.905 17:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:09.905 17:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:09.905 17:25:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:09.905 17:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:09.905 17:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.905 17:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:09.905 17:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.905 17:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:09.905 "name": "raid_bdev1", 00:29:09.905 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:29:09.905 "strip_size_kb": 0, 00:29:09.905 "state": "online", 00:29:09.905 "raid_level": "raid1", 00:29:09.905 "superblock": true, 00:29:09.905 "num_base_bdevs": 2, 00:29:09.905 "num_base_bdevs_discovered": 1, 00:29:09.905 "num_base_bdevs_operational": 1, 00:29:09.905 "base_bdevs_list": [ 00:29:09.905 { 00:29:09.905 "name": null, 00:29:09.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.905 "is_configured": false, 00:29:09.905 "data_offset": 0, 00:29:09.905 "data_size": 63488 00:29:09.905 }, 00:29:09.905 { 00:29:09.905 "name": "BaseBdev2", 00:29:09.905 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:29:09.905 "is_configured": true, 00:29:09.905 "data_offset": 2048, 00:29:09.905 "data_size": 63488 00:29:09.905 } 00:29:09.905 ] 00:29:09.905 }' 00:29:09.905 17:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:09.905 17:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:10.473 "name": "raid_bdev1", 00:29:10.473 "uuid": "1187e7ed-862d-4794-8b8d-c9fef2dd6f5e", 00:29:10.473 "strip_size_kb": 0, 00:29:10.473 "state": "online", 00:29:10.473 "raid_level": "raid1", 00:29:10.473 "superblock": true, 00:29:10.473 "num_base_bdevs": 2, 00:29:10.473 "num_base_bdevs_discovered": 1, 00:29:10.473 "num_base_bdevs_operational": 1, 00:29:10.473 "base_bdevs_list": [ 00:29:10.473 { 00:29:10.473 "name": null, 00:29:10.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.473 "is_configured": false, 00:29:10.473 "data_offset": 0, 00:29:10.473 "data_size": 63488 00:29:10.473 }, 00:29:10.473 { 00:29:10.473 "name": "BaseBdev2", 00:29:10.473 "uuid": "a990c96a-6f61-5cf7-bb25-65d75a0ddb6e", 00:29:10.473 "is_configured": true, 00:29:10.473 "data_offset": 2048, 00:29:10.473 "data_size": 63488 00:29:10.473 } 00:29:10.473 ] 00:29:10.473 }' 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
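Editor's note: the `NOT rpc_cmd bdev_raid_add_base_bdev ...` sequence earlier in this test is the suite's expected-failure idiom: the RPC is supposed to fail with JSON-RPC error -22, and the helper inverts the exit status so the test passes only on failure. A simplified stand-in sketch (the real `NOT` in autotest_common.sh also manages xtrace and the `es` bookkeeping visible in the log):

```shell
# Simplified stand-in for the autotest_common.sh NOT helper: succeed
# only when the wrapped command fails.
NOT() { ! "$@"; }

# Hypothetical failing RPC, echoing the JSON-RPC error seen in the log.
failing_rpc() {
  echo '{"code": -22, "message": "Failed to add base bdev to RAID bdev: Invalid argument"}' >&2
  return 1
}

if NOT failing_rpc 2>/dev/null; then
  echo "expected failure observed"
fi
```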
00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75840 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75840 ']' 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75840 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.473 17:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75840 00:29:10.732 17:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:10.732 killing process with pid 75840 00:29:10.732 Received shutdown signal, test time was about 60.000000 seconds 00:29:10.732 00:29:10.732 Latency(us) 00:29:10.732 [2024-11-26T17:25:40.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.732 [2024-11-26T17:25:40.846Z] =================================================================================================================== 00:29:10.732 [2024-11-26T17:25:40.846Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:10.732 17:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:10.732 17:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75840' 00:29:10.732 17:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75840 00:29:10.732 [2024-11-26 17:25:40.587068] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:10.732 [2024-11-26 
17:25:40.587230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:10.732 17:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75840 00:29:10.732 [2024-11-26 17:25:40.587289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:10.732 [2024-11-26 17:25:40.587316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:29:10.991 [2024-11-26 17:25:40.914523] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:12.367 17:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:29:12.367 00:29:12.367 real 0m25.231s 00:29:12.367 user 0m29.721s 00:29:12.367 sys 0m5.088s 00:29:12.367 17:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:12.367 ************************************ 00:29:12.367 END TEST raid_rebuild_test_sb 00:29:12.367 ************************************ 00:29:12.367 17:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:12.367 17:25:42 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:29:12.367 17:25:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:29:12.367 17:25:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:12.367 17:25:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:12.367 ************************************ 00:29:12.367 START TEST raid_rebuild_test_io 00:29:12.367 ************************************ 00:29:12.367 17:25:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:29:12.367 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:29:12.367 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:29:12.367 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:29:12.367 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:29:12.367 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:29:12.367 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:29:12.368 
17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76586 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76586 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76586 ']' 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:12.368 17:25:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.368 [2024-11-26 17:25:42.354873] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:29:12.368 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:12.368 Zero copy mechanism will not be used. 
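Editor's note: the bdevperf invocation above uses `-o 3M -q 2`, i.e. 3 MiB I/Os at queue depth 2, which is why the log reports an I/O size of 3145728 bytes above the 65536-byte zero-copy threshold. A small arithmetic sketch of the per-target in-flight bound implied by those flags (variable names are illustrative):

```shell
# Illustrative arithmetic only; values taken from the bdevperf flags above.
io_size=$((3 * 1024 * 1024))        # -o 3M -> 3145728 bytes, as logged
queue_depth=2                       # -q 2
inflight=$((queue_depth * io_size)) # upper bound on bytes in flight
echo "$io_size $inflight"
```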
00:29:12.368 [2024-11-26 17:25:42.355224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76586 ] 00:29:12.627 [2024-11-26 17:25:42.535385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.627 [2024-11-26 17:25:42.680389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.885 [2024-11-26 17:25:42.905305] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:12.885 [2024-11-26 17:25:42.905380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:13.144 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.144 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:29:13.144 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:13.144 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:13.144 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.144 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.404 BaseBdev1_malloc 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.404 [2024-11-26 17:25:43.268605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:29:13.404 [2024-11-26 17:25:43.268679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:13.404 [2024-11-26 17:25:43.268706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:13.404 [2024-11-26 17:25:43.268721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:13.404 [2024-11-26 17:25:43.271385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:13.404 [2024-11-26 17:25:43.271431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:13.404 BaseBdev1 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.404 BaseBdev2_malloc 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.404 [2024-11-26 17:25:43.329689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:13.404 [2024-11-26 17:25:43.329897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:13.404 [2024-11-26 17:25:43.329936] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:13.404 [2024-11-26 17:25:43.329952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:13.404 [2024-11-26 17:25:43.332524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:13.404 [2024-11-26 17:25:43.332580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:13.404 BaseBdev2 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.404 spare_malloc 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.404 spare_delay 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.404 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.405 [2024-11-26 17:25:43.412122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:29:13.405 [2024-11-26 17:25:43.412192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:13.405 [2024-11-26 17:25:43.412216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:13.405 [2024-11-26 17:25:43.412232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:13.405 [2024-11-26 17:25:43.415322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:13.405 [2024-11-26 17:25:43.415491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:13.405 spare 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.405 [2024-11-26 17:25:43.424205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:13.405 [2024-11-26 17:25:43.426537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:13.405 [2024-11-26 17:25:43.426637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:13.405 [2024-11-26 17:25:43.426655] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:13.405 [2024-11-26 17:25:43.426939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:13.405 [2024-11-26 17:25:43.427105] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:13.405 [2024-11-26 17:25:43.427119] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:29:13.405 [2024-11-26 17:25:43.427294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:13.405 
"name": "raid_bdev1", 00:29:13.405 "uuid": "672c618f-d7c4-4d99-b84b-7dc5e205713f", 00:29:13.405 "strip_size_kb": 0, 00:29:13.405 "state": "online", 00:29:13.405 "raid_level": "raid1", 00:29:13.405 "superblock": false, 00:29:13.405 "num_base_bdevs": 2, 00:29:13.405 "num_base_bdevs_discovered": 2, 00:29:13.405 "num_base_bdevs_operational": 2, 00:29:13.405 "base_bdevs_list": [ 00:29:13.405 { 00:29:13.405 "name": "BaseBdev1", 00:29:13.405 "uuid": "61086a49-2ed6-5331-826b-ea578ab73e63", 00:29:13.405 "is_configured": true, 00:29:13.405 "data_offset": 0, 00:29:13.405 "data_size": 65536 00:29:13.405 }, 00:29:13.405 { 00:29:13.405 "name": "BaseBdev2", 00:29:13.405 "uuid": "959672e3-f48f-5844-baa8-f80e312db27a", 00:29:13.405 "is_configured": true, 00:29:13.405 "data_offset": 0, 00:29:13.405 "data_size": 65536 00:29:13.405 } 00:29:13.405 ] 00:29:13.405 }' 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:13.405 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.975 [2024-11-26 17:25:43.875930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.975 [2024-11-26 17:25:43.951492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:13.975 17:25:43 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:13.975 17:25:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.975 17:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:13.975 "name": "raid_bdev1", 00:29:13.975 "uuid": "672c618f-d7c4-4d99-b84b-7dc5e205713f", 00:29:13.975 "strip_size_kb": 0, 00:29:13.975 "state": "online", 00:29:13.975 "raid_level": "raid1", 00:29:13.975 "superblock": false, 00:29:13.975 "num_base_bdevs": 2, 00:29:13.975 "num_base_bdevs_discovered": 1, 00:29:13.975 "num_base_bdevs_operational": 1, 00:29:13.975 "base_bdevs_list": [ 00:29:13.975 { 00:29:13.975 "name": null, 00:29:13.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:13.975 "is_configured": false, 00:29:13.975 "data_offset": 0, 00:29:13.975 "data_size": 65536 00:29:13.975 }, 00:29:13.975 { 00:29:13.975 "name": "BaseBdev2", 00:29:13.975 "uuid": "959672e3-f48f-5844-baa8-f80e312db27a", 00:29:13.975 "is_configured": true, 00:29:13.975 "data_offset": 0, 00:29:13.975 "data_size": 65536 00:29:13.975 } 00:29:13.975 ] 00:29:13.975 }' 00:29:13.975 17:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:29:13.975 17:25:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.975 [2024-11-26 17:25:44.036483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:13.975 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:13.975 Zero copy mechanism will not be used. 00:29:13.975 Running I/O for 60 seconds... 00:29:14.558 17:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:14.558 17:25:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.558 17:25:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:14.558 [2024-11-26 17:25:44.375130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:14.558 17:25:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.558 17:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:29:14.558 [2024-11-26 17:25:44.425543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:29:14.558 [2024-11-26 17:25:44.428059] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:14.558 [2024-11-26 17:25:44.557326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:14.817 [2024-11-26 17:25:44.671200] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:14.817 [2024-11-26 17:25:44.671969] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:15.075 [2024-11-26 17:25:45.013295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:15.075 [2024-11-26 
17:25:45.020162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:15.333 147.00 IOPS, 441.00 MiB/s [2024-11-26T17:25:45.447Z] [2024-11-26 17:25:45.248284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:15.333 [2024-11-26 17:25:45.249028] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:15.333 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:15.333 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:15.333 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:15.333 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:15.333 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:15.333 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:15.333 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.333 17:25:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.333 17:25:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:15.591 17:25:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.591 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:15.591 "name": "raid_bdev1", 00:29:15.591 "uuid": "672c618f-d7c4-4d99-b84b-7dc5e205713f", 00:29:15.591 "strip_size_kb": 0, 00:29:15.591 "state": "online", 00:29:15.591 "raid_level": "raid1", 00:29:15.591 "superblock": false, 00:29:15.591 "num_base_bdevs": 2, 
00:29:15.591 "num_base_bdevs_discovered": 2, 00:29:15.591 "num_base_bdevs_operational": 2, 00:29:15.591 "process": { 00:29:15.591 "type": "rebuild", 00:29:15.591 "target": "spare", 00:29:15.591 "progress": { 00:29:15.591 "blocks": 12288, 00:29:15.591 "percent": 18 00:29:15.591 } 00:29:15.591 }, 00:29:15.591 "base_bdevs_list": [ 00:29:15.591 { 00:29:15.591 "name": "spare", 00:29:15.591 "uuid": "474306a6-2bad-54a5-a985-1e2314fa0d29", 00:29:15.591 "is_configured": true, 00:29:15.591 "data_offset": 0, 00:29:15.591 "data_size": 65536 00:29:15.591 }, 00:29:15.591 { 00:29:15.591 "name": "BaseBdev2", 00:29:15.591 "uuid": "959672e3-f48f-5844-baa8-f80e312db27a", 00:29:15.591 "is_configured": true, 00:29:15.591 "data_offset": 0, 00:29:15.591 "data_size": 65536 00:29:15.591 } 00:29:15.591 ] 00:29:15.591 }' 00:29:15.591 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:15.591 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:15.591 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:15.591 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:15.591 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:15.592 17:25:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.592 17:25:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:15.592 [2024-11-26 17:25:45.536658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:15.592 [2024-11-26 17:25:45.594132] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:15.592 [2024-11-26 17:25:45.695542] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: 
No such device 00:29:15.851 [2024-11-26 17:25:45.704948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:15.851 [2024-11-26 17:25:45.705192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:15.851 [2024-11-26 17:25:45.705252] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:15.851 [2024-11-26 17:25:45.745703] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.851 17:25:45 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:15.851 "name": "raid_bdev1", 00:29:15.851 "uuid": "672c618f-d7c4-4d99-b84b-7dc5e205713f", 00:29:15.851 "strip_size_kb": 0, 00:29:15.851 "state": "online", 00:29:15.851 "raid_level": "raid1", 00:29:15.851 "superblock": false, 00:29:15.851 "num_base_bdevs": 2, 00:29:15.851 "num_base_bdevs_discovered": 1, 00:29:15.851 "num_base_bdevs_operational": 1, 00:29:15.851 "base_bdevs_list": [ 00:29:15.851 { 00:29:15.851 "name": null, 00:29:15.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.851 "is_configured": false, 00:29:15.851 "data_offset": 0, 00:29:15.851 "data_size": 65536 00:29:15.851 }, 00:29:15.851 { 00:29:15.851 "name": "BaseBdev2", 00:29:15.851 "uuid": "959672e3-f48f-5844-baa8-f80e312db27a", 00:29:15.851 "is_configured": true, 00:29:15.851 "data_offset": 0, 00:29:15.851 "data_size": 65536 00:29:15.851 } 00:29:15.851 ] 00:29:15.851 }' 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:15.851 17:25:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:16.109 138.00 IOPS, 414.00 MiB/s [2024-11-26T17:25:46.224Z] 17:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:16.110 17:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:16.110 17:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:16.110 17:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:16.110 17:25:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:16.110 17:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:16.110 17:25:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.110 17:25:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:16.110 17:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:16.110 17:25:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.110 17:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:16.110 "name": "raid_bdev1", 00:29:16.110 "uuid": "672c618f-d7c4-4d99-b84b-7dc5e205713f", 00:29:16.110 "strip_size_kb": 0, 00:29:16.110 "state": "online", 00:29:16.110 "raid_level": "raid1", 00:29:16.110 "superblock": false, 00:29:16.110 "num_base_bdevs": 2, 00:29:16.110 "num_base_bdevs_discovered": 1, 00:29:16.110 "num_base_bdevs_operational": 1, 00:29:16.110 "base_bdevs_list": [ 00:29:16.110 { 00:29:16.110 "name": null, 00:29:16.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.110 "is_configured": false, 00:29:16.110 "data_offset": 0, 00:29:16.110 "data_size": 65536 00:29:16.110 }, 00:29:16.110 { 00:29:16.110 "name": "BaseBdev2", 00:29:16.110 "uuid": "959672e3-f48f-5844-baa8-f80e312db27a", 00:29:16.110 "is_configured": true, 00:29:16.110 "data_offset": 0, 00:29:16.110 "data_size": 65536 00:29:16.110 } 00:29:16.110 ] 00:29:16.110 }' 00:29:16.110 17:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:16.369 17:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:16.369 17:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:16.369 17:25:46 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:16.369 17:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:16.369 17:25:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.369 17:25:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:16.369 [2024-11-26 17:25:46.302224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:16.369 17:25:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.369 17:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:29:16.369 [2024-11-26 17:25:46.344758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:16.369 [2024-11-26 17:25:46.347126] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:16.369 [2024-11-26 17:25:46.449571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:16.369 [2024-11-26 17:25:46.450427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:16.628 [2024-11-26 17:25:46.571504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:16.628 [2024-11-26 17:25:46.572222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:16.886 [2024-11-26 17:25:46.906351] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:16.886 [2024-11-26 17:25:46.907090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:17.214 145.33 IOPS, 436.00 MiB/s [2024-11-26T17:25:47.328Z] [2024-11-26 
17:25:47.117189] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:17.214 [2024-11-26 17:25:47.117738] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:17.498 "name": "raid_bdev1", 00:29:17.498 "uuid": "672c618f-d7c4-4d99-b84b-7dc5e205713f", 00:29:17.498 "strip_size_kb": 0, 00:29:17.498 "state": "online", 00:29:17.498 "raid_level": "raid1", 00:29:17.498 "superblock": false, 00:29:17.498 "num_base_bdevs": 2, 00:29:17.498 "num_base_bdevs_discovered": 2, 00:29:17.498 "num_base_bdevs_operational": 2, 00:29:17.498 "process": { 00:29:17.498 "type": "rebuild", 00:29:17.498 "target": "spare", 00:29:17.498 "progress": { 
00:29:17.498 "blocks": 12288, 00:29:17.498 "percent": 18 00:29:17.498 } 00:29:17.498 }, 00:29:17.498 "base_bdevs_list": [ 00:29:17.498 { 00:29:17.498 "name": "spare", 00:29:17.498 "uuid": "474306a6-2bad-54a5-a985-1e2314fa0d29", 00:29:17.498 "is_configured": true, 00:29:17.498 "data_offset": 0, 00:29:17.498 "data_size": 65536 00:29:17.498 }, 00:29:17.498 { 00:29:17.498 "name": "BaseBdev2", 00:29:17.498 "uuid": "959672e3-f48f-5844-baa8-f80e312db27a", 00:29:17.498 "is_configured": true, 00:29:17.498 "data_offset": 0, 00:29:17.498 "data_size": 65536 00:29:17.498 } 00:29:17.498 ] 00:29:17.498 }' 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=413 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:17.498 [2024-11-26 17:25:47.477183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:17.498 "name": "raid_bdev1", 00:29:17.498 "uuid": "672c618f-d7c4-4d99-b84b-7dc5e205713f", 00:29:17.498 "strip_size_kb": 0, 00:29:17.498 "state": "online", 00:29:17.498 "raid_level": "raid1", 00:29:17.498 "superblock": false, 00:29:17.498 "num_base_bdevs": 2, 00:29:17.498 "num_base_bdevs_discovered": 2, 00:29:17.498 "num_base_bdevs_operational": 2, 00:29:17.498 "process": { 00:29:17.498 "type": "rebuild", 00:29:17.498 "target": "spare", 00:29:17.498 "progress": { 00:29:17.498 "blocks": 12288, 00:29:17.498 "percent": 18 00:29:17.498 } 00:29:17.498 }, 00:29:17.498 "base_bdevs_list": [ 00:29:17.498 { 00:29:17.498 "name": "spare", 00:29:17.498 "uuid": "474306a6-2bad-54a5-a985-1e2314fa0d29", 00:29:17.498 "is_configured": true, 00:29:17.498 "data_offset": 0, 00:29:17.498 "data_size": 65536 00:29:17.498 }, 00:29:17.498 { 00:29:17.498 "name": "BaseBdev2", 00:29:17.498 "uuid": "959672e3-f48f-5844-baa8-f80e312db27a", 
00:29:17.498 "is_configured": true, 00:29:17.498 "data_offset": 0, 00:29:17.498 "data_size": 65536 00:29:17.498 } 00:29:17.498 ] 00:29:17.498 }' 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:17.498 17:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:17.760 [2024-11-26 17:25:47.693953] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:18.021 [2024-11-26 17:25:47.917031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:18.280 124.75 IOPS, 374.25 MiB/s [2024-11-26T17:25:48.394Z] [2024-11-26 17:25:48.135124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:18.540 17:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:18.540 17:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:18.540 17:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:18.540 17:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:18.540 17:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:18.540 17:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:18.540 17:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:29:18.540 17:25:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.540 17:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:18.540 17:25:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:18.540 17:25:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.800 17:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:18.800 "name": "raid_bdev1", 00:29:18.800 "uuid": "672c618f-d7c4-4d99-b84b-7dc5e205713f", 00:29:18.800 "strip_size_kb": 0, 00:29:18.800 "state": "online", 00:29:18.800 "raid_level": "raid1", 00:29:18.800 "superblock": false, 00:29:18.800 "num_base_bdevs": 2, 00:29:18.800 "num_base_bdevs_discovered": 2, 00:29:18.800 "num_base_bdevs_operational": 2, 00:29:18.800 "process": { 00:29:18.800 "type": "rebuild", 00:29:18.800 "target": "spare", 00:29:18.800 "progress": { 00:29:18.800 "blocks": 28672, 00:29:18.800 "percent": 43 00:29:18.800 } 00:29:18.800 }, 00:29:18.800 "base_bdevs_list": [ 00:29:18.800 { 00:29:18.800 "name": "spare", 00:29:18.800 "uuid": "474306a6-2bad-54a5-a985-1e2314fa0d29", 00:29:18.800 "is_configured": true, 00:29:18.800 "data_offset": 0, 00:29:18.800 "data_size": 65536 00:29:18.800 }, 00:29:18.800 { 00:29:18.800 "name": "BaseBdev2", 00:29:18.800 "uuid": "959672e3-f48f-5844-baa8-f80e312db27a", 00:29:18.800 "is_configured": true, 00:29:18.800 "data_offset": 0, 00:29:18.800 "data_size": 65536 00:29:18.800 } 00:29:18.800 ] 00:29:18.800 }' 00:29:18.800 17:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:18.800 17:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:18.800 17:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:18.800 17:25:48 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:18.800 17:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:19.319 108.20 IOPS, 324.60 MiB/s [2024-11-26T17:25:49.433Z] [2024-11-26 17:25:49.196674] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:29:19.319 [2024-11-26 17:25:49.203625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:19.886 "name": "raid_bdev1", 00:29:19.886 "uuid": "672c618f-d7c4-4d99-b84b-7dc5e205713f", 00:29:19.886 
"strip_size_kb": 0, 00:29:19.886 "state": "online", 00:29:19.886 "raid_level": "raid1", 00:29:19.886 "superblock": false, 00:29:19.886 "num_base_bdevs": 2, 00:29:19.886 "num_base_bdevs_discovered": 2, 00:29:19.886 "num_base_bdevs_operational": 2, 00:29:19.886 "process": { 00:29:19.886 "type": "rebuild", 00:29:19.886 "target": "spare", 00:29:19.886 "progress": { 00:29:19.886 "blocks": 45056, 00:29:19.886 "percent": 68 00:29:19.886 } 00:29:19.886 }, 00:29:19.886 "base_bdevs_list": [ 00:29:19.886 { 00:29:19.886 "name": "spare", 00:29:19.886 "uuid": "474306a6-2bad-54a5-a985-1e2314fa0d29", 00:29:19.886 "is_configured": true, 00:29:19.886 "data_offset": 0, 00:29:19.886 "data_size": 65536 00:29:19.886 }, 00:29:19.886 { 00:29:19.886 "name": "BaseBdev2", 00:29:19.886 "uuid": "959672e3-f48f-5844-baa8-f80e312db27a", 00:29:19.886 "is_configured": true, 00:29:19.886 "data_offset": 0, 00:29:19.886 "data_size": 65536 00:29:19.886 } 00:29:19.886 ] 00:29:19.886 }' 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:19.886 17:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:19.887 17:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:20.147 98.17 IOPS, 294.50 MiB/s [2024-11-26T17:25:50.261Z] [2024-11-26 17:25:50.114794] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:29:20.721 [2024-11-26 17:25:50.546214] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:29:20.981 [2024-11-26 17:25:50.873627] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:29:20.981 17:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:20.981 17:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:20.981 17:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:20.981 17:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:20.981 17:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:20.981 17:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:20.981 17:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:20.981 17:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:20.981 17:25:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.981 17:25:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:20.981 17:25:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.981 17:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:20.981 "name": "raid_bdev1", 00:29:20.981 "uuid": "672c618f-d7c4-4d99-b84b-7dc5e205713f", 00:29:20.981 "strip_size_kb": 0, 00:29:20.981 "state": "online", 00:29:20.981 "raid_level": "raid1", 00:29:20.981 "superblock": false, 00:29:20.981 "num_base_bdevs": 2, 00:29:20.981 "num_base_bdevs_discovered": 2, 00:29:20.981 "num_base_bdevs_operational": 2, 00:29:20.981 "process": { 00:29:20.981 "type": "rebuild", 00:29:20.981 "target": "spare", 00:29:20.981 "progress": { 00:29:20.981 "blocks": 65536, 00:29:20.981 "percent": 100 00:29:20.981 } 00:29:20.981 }, 00:29:20.981 "base_bdevs_list": [ 00:29:20.981 { 00:29:20.981 "name": "spare", 00:29:20.981 
"uuid": "474306a6-2bad-54a5-a985-1e2314fa0d29", 00:29:20.981 "is_configured": true, 00:29:20.981 "data_offset": 0, 00:29:20.981 "data_size": 65536 00:29:20.981 }, 00:29:20.981 { 00:29:20.981 "name": "BaseBdev2", 00:29:20.981 "uuid": "959672e3-f48f-5844-baa8-f80e312db27a", 00:29:20.981 "is_configured": true, 00:29:20.981 "data_offset": 0, 00:29:20.981 "data_size": 65536 00:29:20.981 } 00:29:20.981 ] 00:29:20.981 }' 00:29:20.981 17:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:20.981 [2024-11-26 17:25:50.979625] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:20.981 [2024-11-26 17:25:50.982365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:20.981 17:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:20.981 17:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:20.981 17:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:20.981 17:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:22.361 90.00 IOPS, 270.00 MiB/s [2024-11-26T17:25:52.475Z] 83.25 IOPS, 249.75 MiB/s [2024-11-26T17:25:52.475Z] 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:22.361 "name": "raid_bdev1", 00:29:22.361 "uuid": "672c618f-d7c4-4d99-b84b-7dc5e205713f", 00:29:22.361 "strip_size_kb": 0, 00:29:22.361 "state": "online", 00:29:22.361 "raid_level": "raid1", 00:29:22.361 "superblock": false, 00:29:22.361 "num_base_bdevs": 2, 00:29:22.361 "num_base_bdevs_discovered": 2, 00:29:22.361 "num_base_bdevs_operational": 2, 00:29:22.361 "base_bdevs_list": [ 00:29:22.361 { 00:29:22.361 "name": "spare", 00:29:22.361 "uuid": "474306a6-2bad-54a5-a985-1e2314fa0d29", 00:29:22.361 "is_configured": true, 00:29:22.361 "data_offset": 0, 00:29:22.361 "data_size": 65536 00:29:22.361 }, 00:29:22.361 { 00:29:22.361 "name": "BaseBdev2", 00:29:22.361 "uuid": "959672e3-f48f-5844-baa8-f80e312db27a", 00:29:22.361 "is_configured": true, 00:29:22.361 "data_offset": 0, 00:29:22.361 "data_size": 65536 00:29:22.361 } 00:29:22.361 ] 00:29:22.361 }' 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:29:22.361 17:25:52 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:22.361 "name": "raid_bdev1", 00:29:22.361 "uuid": "672c618f-d7c4-4d99-b84b-7dc5e205713f", 00:29:22.361 "strip_size_kb": 0, 00:29:22.361 "state": "online", 00:29:22.361 "raid_level": "raid1", 00:29:22.361 "superblock": false, 00:29:22.361 "num_base_bdevs": 2, 00:29:22.361 "num_base_bdevs_discovered": 2, 00:29:22.361 "num_base_bdevs_operational": 2, 00:29:22.361 "base_bdevs_list": [ 00:29:22.361 { 00:29:22.361 "name": "spare", 00:29:22.361 "uuid": "474306a6-2bad-54a5-a985-1e2314fa0d29", 00:29:22.361 "is_configured": true, 00:29:22.361 "data_offset": 0, 00:29:22.361 "data_size": 65536 00:29:22.361 }, 00:29:22.361 { 00:29:22.361 "name": "BaseBdev2", 00:29:22.361 "uuid": "959672e3-f48f-5844-baa8-f80e312db27a", 
00:29:22.361 "is_configured": true, 00:29:22.361 "data_offset": 0, 00:29:22.361 "data_size": 65536 00:29:22.361 } 00:29:22.361 ] 00:29:22.361 }' 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:22.361 "name": "raid_bdev1", 00:29:22.361 "uuid": "672c618f-d7c4-4d99-b84b-7dc5e205713f", 00:29:22.361 "strip_size_kb": 0, 00:29:22.361 "state": "online", 00:29:22.361 "raid_level": "raid1", 00:29:22.361 "superblock": false, 00:29:22.361 "num_base_bdevs": 2, 00:29:22.361 "num_base_bdevs_discovered": 2, 00:29:22.361 "num_base_bdevs_operational": 2, 00:29:22.361 "base_bdevs_list": [ 00:29:22.361 { 00:29:22.361 "name": "spare", 00:29:22.361 "uuid": "474306a6-2bad-54a5-a985-1e2314fa0d29", 00:29:22.361 "is_configured": true, 00:29:22.361 "data_offset": 0, 00:29:22.361 "data_size": 65536 00:29:22.361 }, 00:29:22.361 { 00:29:22.361 "name": "BaseBdev2", 00:29:22.361 "uuid": "959672e3-f48f-5844-baa8-f80e312db27a", 00:29:22.361 "is_configured": true, 00:29:22.361 "data_offset": 0, 00:29:22.361 "data_size": 65536 00:29:22.361 } 00:29:22.361 ] 00:29:22.361 }' 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:22.361 17:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.653 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:22.653 17:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.653 17:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.653 [2024-11-26 17:25:52.714502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:22.653 [2024-11-26 17:25:52.714717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:29:22.912 00:29:22.912 Latency(us) 00:29:22.912 [2024-11-26T17:25:53.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.912 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:22.912 raid_bdev1 : 8.79 79.53 238.58 0.00 0.00 18827.54 305.97 115385.47 00:29:22.912 [2024-11-26T17:25:53.026Z] =================================================================================================================== 00:29:22.912 [2024-11-26T17:25:53.026Z] Total : 79.53 238.58 0.00 0.00 18827.54 305.97 115385.47 00:29:22.912 [2024-11-26 17:25:52.837505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:22.912 [2024-11-26 17:25:52.837591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:22.912 [2024-11-26 17:25:52.837696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:22.912 [2024-11-26 17:25:52.837713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:22.912 { 00:29:22.912 "results": [ 00:29:22.912 { 00:29:22.912 "job": "raid_bdev1", 00:29:22.912 "core_mask": "0x1", 00:29:22.912 "workload": "randrw", 00:29:22.912 "percentage": 50, 00:29:22.912 "status": "finished", 00:29:22.912 "queue_depth": 2, 00:29:22.912 "io_size": 3145728, 00:29:22.912 "runtime": 8.789334, 00:29:22.912 "iops": 79.52820998724135, 00:29:22.912 "mibps": 238.58462996172406, 00:29:22.912 "io_failed": 0, 00:29:22.912 "io_timeout": 0, 00:29:22.912 "avg_latency_us": 18827.54354068635, 00:29:22.912 "min_latency_us": 305.96626506024097, 00:29:22.912 "max_latency_us": 115385.47148594378 00:29:22.912 } 00:29:22.912 ], 00:29:22.912 "core_count": 1 00:29:22.912 } 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:22.912 17:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:29:23.171 /dev/nbd0 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:23.171 17:25:53 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:23.171 1+0 records in 00:29:23.171 1+0 records out 00:29:23.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480922 s, 8.5 MB/s 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:23.171 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:29:23.430 /dev/nbd1 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:23.430 1+0 records in 00:29:23.430 1+0 records out 00:29:23.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537369 s, 7.6 MB/s 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:23.430 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:23.690 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:29:23.690 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:23.690 17:25:53 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:23.690 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:23.690 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:23.690 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:23.690 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:29:23.950 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:23.950 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:23.950 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:23.950 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:23.950 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:23.950 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:23.950 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:23.950 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:23.950 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:29:23.950 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:23.950 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:23.950 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:23.950 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:23.950 17:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:23.950 17:25:53 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76586 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76586 ']' 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76586 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76586 00:29:24.210 killing process with pid 76586 00:29:24.210 Received shutdown signal, test time was about 10.185186 seconds 00:29:24.210 00:29:24.210 Latency(us) 00:29:24.210 [2024-11-26T17:25:54.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.210 
[2024-11-26T17:25:54.324Z] =================================================================================================================== 00:29:24.210 [2024-11-26T17:25:54.324Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76586' 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76586 00:29:24.210 [2024-11-26 17:25:54.207890] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:24.210 17:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76586 00:29:24.469 [2024-11-26 17:25:54.453435] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:29:25.874 00:29:25.874 real 0m13.471s 00:29:25.874 user 0m16.462s 00:29:25.874 sys 0m1.830s 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.874 ************************************ 00:29:25.874 END TEST raid_rebuild_test_io 00:29:25.874 ************************************ 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:25.874 17:25:55 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:29:25.874 17:25:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:29:25.874 17:25:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.874 17:25:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:25.874 ************************************ 00:29:25.874 START TEST 
raid_rebuild_test_sb_io 00:29:25.874 ************************************ 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@576 -- # local strip_size 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76982 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76982 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76982 ']' 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.874 17:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:25.874 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:25.874 Zero copy mechanism will not be used. 00:29:25.874 [2024-11-26 17:25:55.903282] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:29:25.874 [2024-11-26 17:25:55.903426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76982 ] 00:29:26.133 [2024-11-26 17:25:56.077779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.133 [2024-11-26 17:25:56.221285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.393 [2024-11-26 17:25:56.445972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:26.393 [2024-11-26 17:25:56.446228] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:26.664 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.664 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:29:26.664 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:26.664 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:26.664 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.664 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.924 BaseBdev1_malloc 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.924 [2024-11-26 17:25:56.814848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:26.924 [2024-11-26 17:25:56.814924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:26.924 [2024-11-26 17:25:56.814953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:26.924 [2024-11-26 17:25:56.814970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:26.924 [2024-11-26 17:25:56.817750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:26.924 [2024-11-26 17:25:56.817941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:26.924 BaseBdev1 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.924 BaseBdev2_malloc 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.924 [2024-11-26 17:25:56.873672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:26.924 [2024-11-26 17:25:56.873895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:26.924 [2024-11-26 17:25:56.873936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:26.924 [2024-11-26 17:25:56.873954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:26.924 [2024-11-26 17:25:56.876963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:26.924 [2024-11-26 17:25:56.877133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:26.924 BaseBdev2 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.924 spare_malloc 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.924 spare_delay 
00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.924 [2024-11-26 17:25:56.973230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:26.924 [2024-11-26 17:25:56.973327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:26.924 [2024-11-26 17:25:56.973361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:26.924 [2024-11-26 17:25:56.973378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:26.924 [2024-11-26 17:25:56.976279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:26.924 [2024-11-26 17:25:56.976466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:26.924 spare 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.924 [2024-11-26 17:25:56.985450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:26.924 [2024-11-26 17:25:56.987891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:26.924 [2024-11-26 17:25:56.988097] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:26.924 [2024-11-26 17:25:56.988116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:26.924 [2024-11-26 17:25:56.988402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:26.924 [2024-11-26 17:25:56.988615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:26.924 [2024-11-26 17:25:56.988627] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:26.924 [2024-11-26 17:25:56.988819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:26.924 17:25:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.924 17:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.924 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.183 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:27.183 "name": "raid_bdev1", 00:29:27.183 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:27.183 "strip_size_kb": 0, 00:29:27.183 "state": "online", 00:29:27.183 "raid_level": "raid1", 00:29:27.183 "superblock": true, 00:29:27.183 "num_base_bdevs": 2, 00:29:27.183 "num_base_bdevs_discovered": 2, 00:29:27.183 "num_base_bdevs_operational": 2, 00:29:27.183 "base_bdevs_list": [ 00:29:27.183 { 00:29:27.183 "name": "BaseBdev1", 00:29:27.183 "uuid": "efc402fc-8337-5dd6-be17-90ac141a1df3", 00:29:27.183 "is_configured": true, 00:29:27.183 "data_offset": 2048, 00:29:27.183 "data_size": 63488 00:29:27.183 }, 00:29:27.183 { 00:29:27.183 "name": "BaseBdev2", 00:29:27.183 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:27.183 "is_configured": true, 00:29:27.183 "data_offset": 2048, 00:29:27.183 "data_size": 63488 00:29:27.183 } 00:29:27.183 ] 00:29:27.183 }' 00:29:27.183 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:27.183 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:29:27.442 17:25:57 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:27.442 [2024-11-26 17:25:57.449098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:27.442 [2024-11-26 17:25:57.536720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:27.442 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:27.719 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.719 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:27.719 "name": "raid_bdev1", 00:29:27.719 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:27.719 "strip_size_kb": 0, 00:29:27.719 "state": "online", 00:29:27.719 
"raid_level": "raid1", 00:29:27.719 "superblock": true, 00:29:27.719 "num_base_bdevs": 2, 00:29:27.719 "num_base_bdevs_discovered": 1, 00:29:27.719 "num_base_bdevs_operational": 1, 00:29:27.719 "base_bdevs_list": [ 00:29:27.719 { 00:29:27.719 "name": null, 00:29:27.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.719 "is_configured": false, 00:29:27.719 "data_offset": 0, 00:29:27.719 "data_size": 63488 00:29:27.719 }, 00:29:27.719 { 00:29:27.719 "name": "BaseBdev2", 00:29:27.719 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:27.719 "is_configured": true, 00:29:27.719 "data_offset": 2048, 00:29:27.719 "data_size": 63488 00:29:27.719 } 00:29:27.719 ] 00:29:27.719 }' 00:29:27.719 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:27.719 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:27.719 [2024-11-26 17:25:57.666452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:27.719 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:27.719 Zero copy mechanism will not be used. 00:29:27.719 Running I/O for 60 seconds... 
00:29:27.978 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:27.978 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.978 17:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:27.978 [2024-11-26 17:25:57.995962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:27.978 17:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.978 17:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:29:27.978 [2024-11-26 17:25:58.050241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:29:27.978 [2024-11-26 17:25:58.053109] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:28.238 [2024-11-26 17:25:58.170886] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:28.238 [2024-11-26 17:25:58.171782] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:28.498 [2024-11-26 17:25:58.388104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:28.498 [2024-11-26 17:25:58.388779] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:28.757 134.00 IOPS, 402.00 MiB/s [2024-11-26T17:25:58.871Z] [2024-11-26 17:25:58.722135] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:29.016 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:29.016 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:29:29.016 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:29.016 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:29.016 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:29.016 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.016 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:29.016 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.016 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:29.016 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.016 [2024-11-26 17:25:59.077012] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:29.016 [2024-11-26 17:25:59.077936] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:29.016 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:29.016 "name": "raid_bdev1", 00:29:29.016 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:29.016 "strip_size_kb": 0, 00:29:29.016 "state": "online", 00:29:29.016 "raid_level": "raid1", 00:29:29.016 "superblock": true, 00:29:29.016 "num_base_bdevs": 2, 00:29:29.016 "num_base_bdevs_discovered": 2, 00:29:29.016 "num_base_bdevs_operational": 2, 00:29:29.016 "process": { 00:29:29.016 "type": "rebuild", 00:29:29.016 "target": "spare", 00:29:29.016 "progress": { 00:29:29.016 "blocks": 12288, 00:29:29.016 "percent": 19 00:29:29.016 } 00:29:29.016 }, 00:29:29.016 "base_bdevs_list": [ 00:29:29.016 { 00:29:29.016 "name": "spare", 
00:29:29.016 "uuid": "8a5aa6fe-8302-5965-9628-3975a7324f37", 00:29:29.016 "is_configured": true, 00:29:29.016 "data_offset": 2048, 00:29:29.016 "data_size": 63488 00:29:29.016 }, 00:29:29.017 { 00:29:29.017 "name": "BaseBdev2", 00:29:29.017 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:29.017 "is_configured": true, 00:29:29.017 "data_offset": 2048, 00:29:29.017 "data_size": 63488 00:29:29.017 } 00:29:29.017 ] 00:29:29.017 }' 00:29:29.017 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:29.274 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:29.274 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:29.274 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:29.274 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:29.274 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.274 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:29.274 [2024-11-26 17:25:59.190076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:29.274 [2024-11-26 17:25:59.286636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:29.274 [2024-11-26 17:25:59.287030] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:29.533 [2024-11-26 17:25:59.395753] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:29.533 [2024-11-26 17:25:59.410883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:29.533 [2024-11-26 17:25:59.410957] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:29.533 [2024-11-26 17:25:59.410976] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:29.533 [2024-11-26 17:25:59.451693] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.533 17:25:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:29.533 "name": "raid_bdev1", 00:29:29.533 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:29.533 "strip_size_kb": 0, 00:29:29.533 "state": "online", 00:29:29.533 "raid_level": "raid1", 00:29:29.533 "superblock": true, 00:29:29.533 "num_base_bdevs": 2, 00:29:29.533 "num_base_bdevs_discovered": 1, 00:29:29.533 "num_base_bdevs_operational": 1, 00:29:29.533 "base_bdevs_list": [ 00:29:29.533 { 00:29:29.533 "name": null, 00:29:29.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:29.533 "is_configured": false, 00:29:29.533 "data_offset": 0, 00:29:29.533 "data_size": 63488 00:29:29.533 }, 00:29:29.533 { 00:29:29.533 "name": "BaseBdev2", 00:29:29.533 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:29.533 "is_configured": true, 00:29:29.533 "data_offset": 2048, 00:29:29.533 "data_size": 63488 00:29:29.533 } 00:29:29.533 ] 00:29:29.533 }' 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:29.533 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:29.793 127.00 IOPS, 381.00 MiB/s [2024-11-26T17:25:59.907Z] 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:29.793 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:29.793 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:29.793 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:29.793 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:29.793 
17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.793 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.793 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:29.793 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:30.053 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.053 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:30.053 "name": "raid_bdev1", 00:29:30.053 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:30.053 "strip_size_kb": 0, 00:29:30.053 "state": "online", 00:29:30.053 "raid_level": "raid1", 00:29:30.053 "superblock": true, 00:29:30.053 "num_base_bdevs": 2, 00:29:30.053 "num_base_bdevs_discovered": 1, 00:29:30.053 "num_base_bdevs_operational": 1, 00:29:30.053 "base_bdevs_list": [ 00:29:30.053 { 00:29:30.053 "name": null, 00:29:30.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:30.053 "is_configured": false, 00:29:30.053 "data_offset": 0, 00:29:30.053 "data_size": 63488 00:29:30.053 }, 00:29:30.053 { 00:29:30.053 "name": "BaseBdev2", 00:29:30.053 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:30.053 "is_configured": true, 00:29:30.053 "data_offset": 2048, 00:29:30.053 "data_size": 63488 00:29:30.053 } 00:29:30.053 ] 00:29:30.053 }' 00:29:30.053 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:30.053 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:30.053 17:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:30.053 17:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:30.053 17:26:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:30.053 17:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.053 17:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:30.053 [2024-11-26 17:26:00.022327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:30.053 17:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.053 17:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:29:30.053 [2024-11-26 17:26:00.102111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:30.053 [2024-11-26 17:26:00.104696] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:30.313 [2024-11-26 17:26:00.211975] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:30.313 [2024-11-26 17:26:00.212608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:30.612 [2024-11-26 17:26:00.428563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:30.612 [2024-11-26 17:26:00.428911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:30.870 137.00 IOPS, 411.00 MiB/s [2024-11-26T17:26:00.984Z] [2024-11-26 17:26:00.897000] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:31.128 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:31.128 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:29:31.128 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:31.128 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:31.128 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:31.128 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:31.129 "name": "raid_bdev1", 00:29:31.129 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:31.129 "strip_size_kb": 0, 00:29:31.129 "state": "online", 00:29:31.129 "raid_level": "raid1", 00:29:31.129 "superblock": true, 00:29:31.129 "num_base_bdevs": 2, 00:29:31.129 "num_base_bdevs_discovered": 2, 00:29:31.129 "num_base_bdevs_operational": 2, 00:29:31.129 "process": { 00:29:31.129 "type": "rebuild", 00:29:31.129 "target": "spare", 00:29:31.129 "progress": { 00:29:31.129 "blocks": 10240, 00:29:31.129 "percent": 16 00:29:31.129 } 00:29:31.129 }, 00:29:31.129 "base_bdevs_list": [ 00:29:31.129 { 00:29:31.129 "name": "spare", 00:29:31.129 "uuid": "8a5aa6fe-8302-5965-9628-3975a7324f37", 00:29:31.129 "is_configured": true, 00:29:31.129 "data_offset": 2048, 00:29:31.129 "data_size": 63488 00:29:31.129 }, 00:29:31.129 { 00:29:31.129 "name": "BaseBdev2", 00:29:31.129 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:31.129 "is_configured": true, 00:29:31.129 
"data_offset": 2048, 00:29:31.129 "data_size": 63488 00:29:31.129 } 00:29:31.129 ] 00:29:31.129 }' 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:29:31.129 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=427 00:29:31.129 [2024-11-26 17:26:01.233324] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:31.129 17:26:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:31.129 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:31.388 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:31.388 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.388 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:31.388 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.388 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:31.388 "name": "raid_bdev1", 00:29:31.388 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:31.388 "strip_size_kb": 0, 00:29:31.388 "state": "online", 00:29:31.388 "raid_level": "raid1", 00:29:31.388 "superblock": true, 00:29:31.388 "num_base_bdevs": 2, 00:29:31.389 "num_base_bdevs_discovered": 2, 00:29:31.389 "num_base_bdevs_operational": 2, 00:29:31.389 "process": { 00:29:31.389 "type": "rebuild", 00:29:31.389 "target": "spare", 00:29:31.389 "progress": { 00:29:31.389 "blocks": 14336, 00:29:31.389 "percent": 22 00:29:31.389 } 00:29:31.389 }, 00:29:31.389 "base_bdevs_list": [ 00:29:31.389 { 00:29:31.389 "name": "spare", 00:29:31.389 "uuid": "8a5aa6fe-8302-5965-9628-3975a7324f37", 00:29:31.389 "is_configured": true, 00:29:31.389 "data_offset": 2048, 00:29:31.389 "data_size": 63488 00:29:31.389 }, 00:29:31.389 { 00:29:31.389 "name": "BaseBdev2", 00:29:31.389 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:31.389 "is_configured": true, 00:29:31.389 "data_offset": 2048, 00:29:31.389 "data_size": 63488 00:29:31.389 } 00:29:31.389 ] 00:29:31.389 }' 00:29:31.389 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:31.389 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:31.389 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:31.389 [2024-11-26 17:26:01.355118] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:31.389 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:31.389 17:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:31.647 126.75 IOPS, 380.25 MiB/s [2024-11-26T17:26:01.761Z] [2024-11-26 17:26:01.681572] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:32.214 [2024-11-26 17:26:02.035357] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:29:32.214 [2024-11-26 17:26:02.275118] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:29:32.473 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:32.473 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:32.473 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:32.473 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:32.473 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:32.473 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:32.473 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:29:32.473 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.473 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.473 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.473 [2024-11-26 17:26:02.412976] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:29:32.473 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.473 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:32.473 "name": "raid_bdev1", 00:29:32.473 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:32.473 "strip_size_kb": 0, 00:29:32.473 "state": "online", 00:29:32.473 "raid_level": "raid1", 00:29:32.473 "superblock": true, 00:29:32.473 "num_base_bdevs": 2, 00:29:32.473 "num_base_bdevs_discovered": 2, 00:29:32.473 "num_base_bdevs_operational": 2, 00:29:32.473 "process": { 00:29:32.473 "type": "rebuild", 00:29:32.473 "target": "spare", 00:29:32.473 "progress": { 00:29:32.473 "blocks": 32768, 00:29:32.473 "percent": 51 00:29:32.473 } 00:29:32.473 }, 00:29:32.473 "base_bdevs_list": [ 00:29:32.473 { 00:29:32.473 "name": "spare", 00:29:32.473 "uuid": "8a5aa6fe-8302-5965-9628-3975a7324f37", 00:29:32.473 "is_configured": true, 00:29:32.473 "data_offset": 2048, 00:29:32.473 "data_size": 63488 00:29:32.473 }, 00:29:32.473 { 00:29:32.473 "name": "BaseBdev2", 00:29:32.473 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:32.473 "is_configured": true, 00:29:32.473 "data_offset": 2048, 00:29:32.474 "data_size": 63488 00:29:32.474 } 00:29:32.474 ] 00:29:32.474 }' 00:29:32.474 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:32.474 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:32.474 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:32.474 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:32.474 17:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:32.733 114.20 IOPS, 342.60 MiB/s [2024-11-26T17:26:02.847Z] [2024-11-26 17:26:02.759773] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:29:33.301 [2024-11-26 17:26:03.121048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:29:33.560 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:33.560 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:33.560 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:33.560 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:33.560 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:33.560 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:33.560 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:33.560 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.560 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.560 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:33.560 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.560 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:33.560 "name": "raid_bdev1", 00:29:33.560 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:33.560 "strip_size_kb": 0, 00:29:33.560 "state": "online", 00:29:33.560 "raid_level": "raid1", 00:29:33.560 "superblock": true, 00:29:33.560 "num_base_bdevs": 2, 00:29:33.560 "num_base_bdevs_discovered": 2, 00:29:33.560 "num_base_bdevs_operational": 2, 00:29:33.560 "process": { 00:29:33.560 "type": "rebuild", 00:29:33.560 "target": "spare", 00:29:33.560 "progress": { 00:29:33.560 "blocks": 53248, 00:29:33.560 "percent": 83 00:29:33.560 } 00:29:33.560 }, 00:29:33.560 "base_bdevs_list": [ 00:29:33.560 { 00:29:33.560 "name": "spare", 00:29:33.560 "uuid": "8a5aa6fe-8302-5965-9628-3975a7324f37", 00:29:33.560 "is_configured": true, 00:29:33.560 "data_offset": 2048, 00:29:33.560 "data_size": 63488 00:29:33.560 }, 00:29:33.560 { 00:29:33.560 "name": "BaseBdev2", 00:29:33.560 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:33.560 "is_configured": true, 00:29:33.560 "data_offset": 2048, 00:29:33.560 "data_size": 63488 00:29:33.560 } 00:29:33.560 ] 00:29:33.560 }' 00:29:33.560 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:33.560 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:33.560 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:33.819 102.00 IOPS, 306.00 MiB/s [2024-11-26T17:26:03.933Z] 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:33.819 17:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:34.078 [2024-11-26 17:26:04.071858] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:34.078 [2024-11-26 
17:26:04.177865] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:34.078 [2024-11-26 17:26:04.181978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:34.647 93.14 IOPS, 279.43 MiB/s [2024-11-26T17:26:04.761Z] 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:34.647 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:34.647 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:34.647 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:34.647 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:34.647 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:34.647 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:34.647 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.647 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.647 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:34.647 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.647 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:34.647 "name": "raid_bdev1", 00:29:34.647 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:34.647 "strip_size_kb": 0, 00:29:34.647 "state": "online", 00:29:34.647 "raid_level": "raid1", 00:29:34.647 "superblock": true, 00:29:34.647 "num_base_bdevs": 2, 00:29:34.647 "num_base_bdevs_discovered": 2, 00:29:34.647 "num_base_bdevs_operational": 2, 
00:29:34.647 "base_bdevs_list": [ 00:29:34.647 { 00:29:34.647 "name": "spare", 00:29:34.647 "uuid": "8a5aa6fe-8302-5965-9628-3975a7324f37", 00:29:34.647 "is_configured": true, 00:29:34.647 "data_offset": 2048, 00:29:34.647 "data_size": 63488 00:29:34.647 }, 00:29:34.647 { 00:29:34.647 "name": "BaseBdev2", 00:29:34.647 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:34.647 "is_configured": true, 00:29:34.647 "data_offset": 2048, 00:29:34.647 "data_size": 63488 00:29:34.647 } 00:29:34.647 ] 00:29:34.647 }' 00:29:34.647 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:34.917 "name": "raid_bdev1", 00:29:34.917 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:34.917 "strip_size_kb": 0, 00:29:34.917 "state": "online", 00:29:34.917 "raid_level": "raid1", 00:29:34.917 "superblock": true, 00:29:34.917 "num_base_bdevs": 2, 00:29:34.917 "num_base_bdevs_discovered": 2, 00:29:34.917 "num_base_bdevs_operational": 2, 00:29:34.917 "base_bdevs_list": [ 00:29:34.917 { 00:29:34.917 "name": "spare", 00:29:34.917 "uuid": "8a5aa6fe-8302-5965-9628-3975a7324f37", 00:29:34.917 "is_configured": true, 00:29:34.917 "data_offset": 2048, 00:29:34.917 "data_size": 63488 00:29:34.917 }, 00:29:34.917 { 00:29:34.917 "name": "BaseBdev2", 00:29:34.917 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:34.917 "is_configured": true, 00:29:34.917 "data_offset": 2048, 00:29:34.917 "data_size": 63488 00:29:34.917 } 00:29:34.917 ] 00:29:34.917 }' 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:34.917 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:34.918 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:34.918 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:34.918 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:34.918 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:34.918 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:34.918 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.918 17:26:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.918 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:34.918 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.177 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:35.177 "name": "raid_bdev1", 00:29:35.177 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:35.177 "strip_size_kb": 0, 00:29:35.177 "state": "online", 00:29:35.177 "raid_level": "raid1", 00:29:35.177 "superblock": true, 00:29:35.177 "num_base_bdevs": 2, 00:29:35.177 "num_base_bdevs_discovered": 2, 00:29:35.177 "num_base_bdevs_operational": 2, 00:29:35.177 "base_bdevs_list": [ 00:29:35.177 { 00:29:35.177 "name": "spare", 00:29:35.177 "uuid": "8a5aa6fe-8302-5965-9628-3975a7324f37", 00:29:35.177 "is_configured": true, 00:29:35.177 
"data_offset": 2048, 00:29:35.177 "data_size": 63488 00:29:35.177 }, 00:29:35.177 { 00:29:35.177 "name": "BaseBdev2", 00:29:35.177 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:35.177 "is_configured": true, 00:29:35.177 "data_offset": 2048, 00:29:35.177 "data_size": 63488 00:29:35.177 } 00:29:35.177 ] 00:29:35.177 }' 00:29:35.177 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:35.177 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.437 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:35.437 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.437 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.437 [2024-11-26 17:26:05.493048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:35.437 [2024-11-26 17:26:05.493281] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:35.696 00:29:35.696 Latency(us) 00:29:35.696 [2024-11-26T17:26:05.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.696 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:35.696 raid_bdev1 : 7.94 84.80 254.39 0.00 0.00 16151.43 292.81 111174.32 00:29:35.696 [2024-11-26T17:26:05.810Z] =================================================================================================================== 00:29:35.696 [2024-11-26T17:26:05.810Z] Total : 84.80 254.39 0.00 0.00 16151.43 292.81 111174.32 00:29:35.696 [2024-11-26 17:26:05.617025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:35.696 [2024-11-26 17:26:05.617108] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:35.696 [2024-11-26 17:26:05.617224] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:35.696 [2024-11-26 17:26:05.617238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:35.696 { 00:29:35.696 "results": [ 00:29:35.696 { 00:29:35.696 "job": "raid_bdev1", 00:29:35.696 "core_mask": "0x1", 00:29:35.696 "workload": "randrw", 00:29:35.696 "percentage": 50, 00:29:35.696 "status": "finished", 00:29:35.696 "queue_depth": 2, 00:29:35.696 "io_size": 3145728, 00:29:35.696 "runtime": 7.93648, 00:29:35.696 "iops": 84.79829849001068, 00:29:35.696 "mibps": 254.39489547003205, 00:29:35.696 "io_failed": 0, 00:29:35.696 "io_timeout": 0, 00:29:35.696 "avg_latency_us": 16151.43454770046, 00:29:35.696 "min_latency_us": 292.8064257028112, 00:29:35.696 "max_latency_us": 111174.32289156626 00:29:35.696 } 00:29:35.696 ], 00:29:35.696 "core_count": 1 00:29:35.696 } 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks 
/var/tmp/spdk.sock spare /dev/nbd0 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:35.696 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:29:35.956 /dev/nbd0 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:35.956 1+0 records in 00:29:35.956 1+0 records out 00:29:35.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556295 s, 7.4 MB/s 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:35.956 17:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:29:36.216 /dev/nbd1 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:36.217 1+0 
records in 00:29:36.217 1+0 records out 00:29:36.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046865 s, 8.7 MB/s 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:36.217 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:36.476 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:29:36.476 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:36.476 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:36.476 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:36.476 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:36.476 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:36.476 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:29:36.735 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:29:36.735 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:36.735 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:36.735 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:36.735 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:36.735 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:36.735 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:36.735 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:36.735 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:29:36.735 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:36.735 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:36.735 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:36.735 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:36.735 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:36.735 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:36.994 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:36.994 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:36.994 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:36.994 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:36.994 17:26:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:36.994 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:36.994 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:36.994 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:36.994 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:29:36.994 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:29:36.994 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.994 17:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:36.994 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.994 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:36.994 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.994 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:36.994 [2024-11-26 17:26:07.008774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:36.994 [2024-11-26 17:26:07.008859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:36.994 [2024-11-26 17:26:07.008896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:29:36.994 [2024-11-26 17:26:07.008909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:36.994 [2024-11-26 17:26:07.011793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:36.994 [2024-11-26 17:26:07.011838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
spare 00:29:36.994 [2024-11-26 17:26:07.011953] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:36.994 [2024-11-26 17:26:07.012009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:36.994 [2024-11-26 17:26:07.012228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:36.994 spare 00:29:36.994 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.994 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:29:36.994 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.994 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:37.254 [2024-11-26 17:26:07.112195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:29:37.254 [2024-11-26 17:26:07.112282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:37.254 [2024-11-26 17:26:07.112716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:29:37.254 [2024-11-26 17:26:07.112967] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:29:37.254 [2024-11-26 17:26:07.112991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:29:37.254 [2024-11-26 17:26:07.113247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:37.254 17:26:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:37.254 "name": "raid_bdev1", 00:29:37.254 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:37.254 "strip_size_kb": 0, 00:29:37.254 "state": "online", 00:29:37.254 "raid_level": "raid1", 00:29:37.254 "superblock": true, 00:29:37.254 "num_base_bdevs": 2, 00:29:37.254 "num_base_bdevs_discovered": 2, 00:29:37.254 "num_base_bdevs_operational": 2, 00:29:37.254 "base_bdevs_list": [ 00:29:37.254 { 00:29:37.254 "name": "spare", 00:29:37.254 "uuid": 
"8a5aa6fe-8302-5965-9628-3975a7324f37", 00:29:37.254 "is_configured": true, 00:29:37.254 "data_offset": 2048, 00:29:37.254 "data_size": 63488 00:29:37.254 }, 00:29:37.254 { 00:29:37.254 "name": "BaseBdev2", 00:29:37.254 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:37.254 "is_configured": true, 00:29:37.254 "data_offset": 2048, 00:29:37.254 "data_size": 63488 00:29:37.254 } 00:29:37.254 ] 00:29:37.254 }' 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:37.254 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:37.513 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:37.513 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:37.513 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:37.513 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:37.513 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:37.513 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.513 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.513 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.513 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:37.513 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.513 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:37.513 "name": "raid_bdev1", 00:29:37.513 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:37.513 "strip_size_kb": 0, 00:29:37.513 
"state": "online", 00:29:37.513 "raid_level": "raid1", 00:29:37.513 "superblock": true, 00:29:37.513 "num_base_bdevs": 2, 00:29:37.513 "num_base_bdevs_discovered": 2, 00:29:37.513 "num_base_bdevs_operational": 2, 00:29:37.513 "base_bdevs_list": [ 00:29:37.513 { 00:29:37.513 "name": "spare", 00:29:37.513 "uuid": "8a5aa6fe-8302-5965-9628-3975a7324f37", 00:29:37.513 "is_configured": true, 00:29:37.513 "data_offset": 2048, 00:29:37.513 "data_size": 63488 00:29:37.513 }, 00:29:37.513 { 00:29:37.513 "name": "BaseBdev2", 00:29:37.513 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:37.513 "is_configured": true, 00:29:37.513 "data_offset": 2048, 00:29:37.513 "data_size": 63488 00:29:37.513 } 00:29:37.513 ] 00:29:37.513 }' 00:29:37.513 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:37.513 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:37.513 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:37.773 
17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:37.773 [2024-11-26 17:26:07.680573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:37.773 "name": "raid_bdev1", 00:29:37.773 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:37.773 "strip_size_kb": 0, 00:29:37.773 "state": "online", 00:29:37.773 "raid_level": "raid1", 00:29:37.773 "superblock": true, 00:29:37.773 "num_base_bdevs": 2, 00:29:37.773 "num_base_bdevs_discovered": 1, 00:29:37.773 "num_base_bdevs_operational": 1, 00:29:37.773 "base_bdevs_list": [ 00:29:37.773 { 00:29:37.773 "name": null, 00:29:37.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:37.773 "is_configured": false, 00:29:37.773 "data_offset": 0, 00:29:37.773 "data_size": 63488 00:29:37.773 }, 00:29:37.773 { 00:29:37.773 "name": "BaseBdev2", 00:29:37.773 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:37.773 "is_configured": true, 00:29:37.773 "data_offset": 2048, 00:29:37.773 "data_size": 63488 00:29:37.773 } 00:29:37.773 ] 00:29:37.773 }' 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:37.773 17:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:38.032 17:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:38.032 17:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.032 17:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:38.032 [2024-11-26 17:26:08.139986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:38.032 [2024-11-26 17:26:08.140254] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:38.032 [2024-11-26 17:26:08.140281] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:38.032 [2024-11-26 17:26:08.140337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:38.291 [2024-11-26 17:26:08.159080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:29:38.291 17:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.291 17:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:29:38.291 [2024-11-26 17:26:08.161463] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:39.228 "name": "raid_bdev1", 00:29:39.228 "uuid": 
"4477459e-775a-4cec-b7f8-f84937835862", 00:29:39.228 "strip_size_kb": 0, 00:29:39.228 "state": "online", 00:29:39.228 "raid_level": "raid1", 00:29:39.228 "superblock": true, 00:29:39.228 "num_base_bdevs": 2, 00:29:39.228 "num_base_bdevs_discovered": 2, 00:29:39.228 "num_base_bdevs_operational": 2, 00:29:39.228 "process": { 00:29:39.228 "type": "rebuild", 00:29:39.228 "target": "spare", 00:29:39.228 "progress": { 00:29:39.228 "blocks": 20480, 00:29:39.228 "percent": 32 00:29:39.228 } 00:29:39.228 }, 00:29:39.228 "base_bdevs_list": [ 00:29:39.228 { 00:29:39.228 "name": "spare", 00:29:39.228 "uuid": "8a5aa6fe-8302-5965-9628-3975a7324f37", 00:29:39.228 "is_configured": true, 00:29:39.228 "data_offset": 2048, 00:29:39.228 "data_size": 63488 00:29:39.228 }, 00:29:39.228 { 00:29:39.228 "name": "BaseBdev2", 00:29:39.228 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:39.228 "is_configured": true, 00:29:39.228 "data_offset": 2048, 00:29:39.228 "data_size": 63488 00:29:39.228 } 00:29:39.228 ] 00:29:39.228 }' 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.228 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:39.228 [2024-11-26 17:26:09.316991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:39.487 [2024-11-26 17:26:09.369625] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:39.487 [2024-11-26 17:26:09.369725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:39.487 [2024-11-26 17:26:09.369746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:39.487 [2024-11-26 17:26:09.369759] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:39.487 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.488 17:26:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:39.488 "name": "raid_bdev1", 00:29:39.488 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:39.488 "strip_size_kb": 0, 00:29:39.488 "state": "online", 00:29:39.488 "raid_level": "raid1", 00:29:39.488 "superblock": true, 00:29:39.488 "num_base_bdevs": 2, 00:29:39.488 "num_base_bdevs_discovered": 1, 00:29:39.488 "num_base_bdevs_operational": 1, 00:29:39.488 "base_bdevs_list": [ 00:29:39.488 { 00:29:39.488 "name": null, 00:29:39.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:39.488 "is_configured": false, 00:29:39.488 "data_offset": 0, 00:29:39.488 "data_size": 63488 00:29:39.488 }, 00:29:39.488 { 00:29:39.488 "name": "BaseBdev2", 00:29:39.488 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:39.488 "is_configured": true, 00:29:39.488 "data_offset": 2048, 00:29:39.488 "data_size": 63488 00:29:39.488 } 00:29:39.488 ] 00:29:39.488 }' 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:39.488 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:39.747 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:39.747 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.747 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:39.747 [2024-11-26 17:26:09.852052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:39.747 [2024-11-26 17:26:09.852147] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.747 [2024-11-26 17:26:09.852180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:29:39.747 [2024-11-26 17:26:09.852201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.747 [2024-11-26 17:26:09.852856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.747 [2024-11-26 17:26:09.852900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:39.747 [2024-11-26 17:26:09.853034] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:39.747 [2024-11-26 17:26:09.853061] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:39.747 [2024-11-26 17:26:09.853077] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:39.747 [2024-11-26 17:26:09.853110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:40.006 [2024-11-26 17:26:09.871763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:29:40.006 spare 00:29:40.006 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.006 17:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:29:40.006 [2024-11-26 17:26:09.874098] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:40.944 17:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:40.944 17:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:40.944 17:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:40.944 17:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:40.944 17:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:40.944 17:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:40.944 17:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.944 17:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.944 17:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:40.944 17:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.944 17:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:40.944 "name": "raid_bdev1", 00:29:40.944 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:40.944 "strip_size_kb": 0, 00:29:40.944 
"state": "online", 00:29:40.944 "raid_level": "raid1", 00:29:40.944 "superblock": true, 00:29:40.944 "num_base_bdevs": 2, 00:29:40.944 "num_base_bdevs_discovered": 2, 00:29:40.944 "num_base_bdevs_operational": 2, 00:29:40.944 "process": { 00:29:40.944 "type": "rebuild", 00:29:40.944 "target": "spare", 00:29:40.944 "progress": { 00:29:40.944 "blocks": 20480, 00:29:40.944 "percent": 32 00:29:40.944 } 00:29:40.944 }, 00:29:40.944 "base_bdevs_list": [ 00:29:40.944 { 00:29:40.944 "name": "spare", 00:29:40.944 "uuid": "8a5aa6fe-8302-5965-9628-3975a7324f37", 00:29:40.944 "is_configured": true, 00:29:40.944 "data_offset": 2048, 00:29:40.944 "data_size": 63488 00:29:40.944 }, 00:29:40.944 { 00:29:40.944 "name": "BaseBdev2", 00:29:40.944 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:40.944 "is_configured": true, 00:29:40.944 "data_offset": 2048, 00:29:40.944 "data_size": 63488 00:29:40.944 } 00:29:40.944 ] 00:29:40.944 }' 00:29:40.944 17:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:40.944 17:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:40.944 17:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:40.944 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:40.944 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:29:40.944 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.944 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:40.944 [2024-11-26 17:26:11.034437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:41.203 [2024-11-26 17:26:11.082274] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:29:41.203 [2024-11-26 17:26:11.082372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:41.203 [2024-11-26 17:26:11.082394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:41.203 [2024-11-26 17:26:11.082404] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.203 17:26:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.203 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:41.203 "name": "raid_bdev1", 00:29:41.203 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:41.203 "strip_size_kb": 0, 00:29:41.203 "state": "online", 00:29:41.203 "raid_level": "raid1", 00:29:41.203 "superblock": true, 00:29:41.203 "num_base_bdevs": 2, 00:29:41.203 "num_base_bdevs_discovered": 1, 00:29:41.203 "num_base_bdevs_operational": 1, 00:29:41.203 "base_bdevs_list": [ 00:29:41.203 { 00:29:41.204 "name": null, 00:29:41.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:41.204 "is_configured": false, 00:29:41.204 "data_offset": 0, 00:29:41.204 "data_size": 63488 00:29:41.204 }, 00:29:41.204 { 00:29:41.204 "name": "BaseBdev2", 00:29:41.204 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:41.204 "is_configured": true, 00:29:41.204 "data_offset": 2048, 00:29:41.204 "data_size": 63488 00:29:41.204 } 00:29:41.204 ] 00:29:41.204 }' 00:29:41.204 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:41.204 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:41.772 "name": "raid_bdev1", 00:29:41.772 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:41.772 "strip_size_kb": 0, 00:29:41.772 "state": "online", 00:29:41.772 "raid_level": "raid1", 00:29:41.772 "superblock": true, 00:29:41.772 "num_base_bdevs": 2, 00:29:41.772 "num_base_bdevs_discovered": 1, 00:29:41.772 "num_base_bdevs_operational": 1, 00:29:41.772 "base_bdevs_list": [ 00:29:41.772 { 00:29:41.772 "name": null, 00:29:41.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:41.772 "is_configured": false, 00:29:41.772 "data_offset": 0, 00:29:41.772 "data_size": 63488 00:29:41.772 }, 00:29:41.772 { 00:29:41.772 "name": "BaseBdev2", 00:29:41.772 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:41.772 "is_configured": true, 00:29:41.772 "data_offset": 2048, 00:29:41.772 "data_size": 63488 00:29:41.772 } 00:29:41.772 ] 00:29:41.772 }' 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:41.772 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.773 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:41.773 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.773 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:41.773 [2024-11-26 17:26:11.736143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:41.773 [2024-11-26 17:26:11.736212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:41.773 [2024-11-26 17:26:11.736255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:29:41.773 [2024-11-26 17:26:11.736270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:41.773 [2024-11-26 17:26:11.736819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:41.773 [2024-11-26 17:26:11.736850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:41.773 [2024-11-26 17:26:11.736962] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:41.773 [2024-11-26 17:26:11.736979] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:41.773 [2024-11-26 17:26:11.736994] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:41.773 [2024-11-26 17:26:11.737007] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:29:41.773 BaseBdev1 00:29:41.773 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.773 17:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:42.709 "name": "raid_bdev1", 00:29:42.709 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:42.709 "strip_size_kb": 0, 00:29:42.709 "state": "online", 00:29:42.709 "raid_level": "raid1", 00:29:42.709 "superblock": true, 00:29:42.709 "num_base_bdevs": 2, 00:29:42.709 "num_base_bdevs_discovered": 1, 00:29:42.709 "num_base_bdevs_operational": 1, 00:29:42.709 "base_bdevs_list": [ 00:29:42.709 { 00:29:42.709 "name": null, 00:29:42.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:42.709 "is_configured": false, 00:29:42.709 "data_offset": 0, 00:29:42.709 "data_size": 63488 00:29:42.709 }, 00:29:42.709 { 00:29:42.709 "name": "BaseBdev2", 00:29:42.709 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:42.709 "is_configured": true, 00:29:42.709 "data_offset": 2048, 00:29:42.709 "data_size": 63488 00:29:42.709 } 00:29:42.709 ] 00:29:42.709 }' 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:42.709 17:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:43.275 "name": "raid_bdev1", 00:29:43.275 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:43.275 "strip_size_kb": 0, 00:29:43.275 "state": "online", 00:29:43.275 "raid_level": "raid1", 00:29:43.275 "superblock": true, 00:29:43.275 "num_base_bdevs": 2, 00:29:43.275 "num_base_bdevs_discovered": 1, 00:29:43.275 "num_base_bdevs_operational": 1, 00:29:43.275 "base_bdevs_list": [ 00:29:43.275 { 00:29:43.275 "name": null, 00:29:43.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:43.275 "is_configured": false, 00:29:43.275 "data_offset": 0, 00:29:43.275 "data_size": 63488 00:29:43.275 }, 00:29:43.275 { 00:29:43.275 "name": "BaseBdev2", 00:29:43.275 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:43.275 "is_configured": true, 00:29:43.275 "data_offset": 2048, 00:29:43.275 "data_size": 63488 00:29:43.275 } 00:29:43.275 ] 00:29:43.275 }' 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:43.275 [2024-11-26 17:26:13.342272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:43.275 [2024-11-26 17:26:13.342488] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:43.275 [2024-11-26 17:26:13.342509] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:43.275 request: 00:29:43.275 { 00:29:43.275 "base_bdev": "BaseBdev1", 00:29:43.275 "raid_bdev": "raid_bdev1", 00:29:43.275 "method": "bdev_raid_add_base_bdev", 00:29:43.275 "req_id": 1 00:29:43.275 } 00:29:43.275 Got JSON-RPC error response 00:29:43.275 response: 00:29:43.275 { 00:29:43.275 "code": -22, 00:29:43.275 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:43.275 } 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:43.275 17:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:29:44.650 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:44.650 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:44.650 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:44.650 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:44.651 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:44.651 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:44.651 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:44.651 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:44.651 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:44.651 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:44.651 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:44.651 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.651 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:44.651 17:26:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.651 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.651 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:44.651 "name": "raid_bdev1", 00:29:44.651 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:44.651 "strip_size_kb": 0, 00:29:44.651 "state": "online", 00:29:44.651 "raid_level": "raid1", 00:29:44.651 "superblock": true, 00:29:44.651 "num_base_bdevs": 2, 00:29:44.651 "num_base_bdevs_discovered": 1, 00:29:44.651 "num_base_bdevs_operational": 1, 00:29:44.651 "base_bdevs_list": [ 00:29:44.651 { 00:29:44.651 "name": null, 00:29:44.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:44.651 "is_configured": false, 00:29:44.651 "data_offset": 0, 00:29:44.651 "data_size": 63488 00:29:44.651 }, 00:29:44.651 { 00:29:44.651 "name": "BaseBdev2", 00:29:44.651 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:44.651 "is_configured": true, 00:29:44.651 "data_offset": 2048, 00:29:44.651 "data_size": 63488 00:29:44.651 } 00:29:44.651 ] 00:29:44.651 }' 00:29:44.651 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:44.651 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:44.910 17:26:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:44.910 "name": "raid_bdev1", 00:29:44.910 "uuid": "4477459e-775a-4cec-b7f8-f84937835862", 00:29:44.910 "strip_size_kb": 0, 00:29:44.910 "state": "online", 00:29:44.910 "raid_level": "raid1", 00:29:44.910 "superblock": true, 00:29:44.910 "num_base_bdevs": 2, 00:29:44.910 "num_base_bdevs_discovered": 1, 00:29:44.910 "num_base_bdevs_operational": 1, 00:29:44.910 "base_bdevs_list": [ 00:29:44.910 { 00:29:44.910 "name": null, 00:29:44.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:44.910 "is_configured": false, 00:29:44.910 "data_offset": 0, 00:29:44.910 "data_size": 63488 00:29:44.910 }, 00:29:44.910 { 00:29:44.910 "name": "BaseBdev2", 00:29:44.910 "uuid": "e78bd1e4-6029-5c6f-91a4-21c6f3317cc0", 00:29:44.910 "is_configured": true, 00:29:44.910 "data_offset": 2048, 00:29:44.910 "data_size": 63488 00:29:44.910 } 00:29:44.910 ] 00:29:44.910 }' 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:44.910 17:26:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76982 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76982 ']' 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76982 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76982 00:29:44.910 killing process with pid 76982 00:29:44.910 Received shutdown signal, test time was about 17.333492 seconds 00:29:44.910 00:29:44.910 Latency(us) 00:29:44.910 [2024-11-26T17:26:15.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.910 [2024-11-26T17:26:15.024Z] =================================================================================================================== 00:29:44.910 [2024-11-26T17:26:15.024Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76982' 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76982 00:29:44.910 [2024-11-26 17:26:14.974598] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:44.910 17:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76982 00:29:44.910 [2024-11-26 17:26:14.974769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:44.910 [2024-11-26 17:26:14.974833] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:44.910 [2024-11-26 17:26:14.974850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:29:45.169 [2024-11-26 17:26:15.218139] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:29:46.549 00:29:46.549 real 0m20.692s 00:29:46.549 user 0m26.870s 00:29:46.549 sys 0m2.694s 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:46.549 ************************************ 00:29:46.549 END TEST raid_rebuild_test_sb_io 00:29:46.549 ************************************ 00:29:46.549 17:26:16 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:29:46.549 17:26:16 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:29:46.549 17:26:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:29:46.549 17:26:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.549 17:26:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:46.549 ************************************ 00:29:46.549 START TEST raid_rebuild_test 00:29:46.549 ************************************ 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:29:46.549 17:26:16 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77673 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77673 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77673 ']' 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:46.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:46.549 17:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.807 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:46.807 Zero copy mechanism will not be used. 
00:29:46.807 [2024-11-26 17:26:16.675992] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:29:46.807 [2024-11-26 17:26:16.676140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77673 ] 00:29:46.807 [2024-11-26 17:26:16.861605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.065 [2024-11-26 17:26:17.008996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.324 [2024-11-26 17:26:17.239658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:47.324 [2024-11-26 17:26:17.239738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.583 BaseBdev1_malloc 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.583 
[2024-11-26 17:26:17.582235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:47.583 [2024-11-26 17:26:17.582313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:47.583 [2024-11-26 17:26:17.582341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:47.583 [2024-11-26 17:26:17.582358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:47.583 [2024-11-26 17:26:17.585002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:47.583 [2024-11-26 17:26:17.585050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:47.583 BaseBdev1 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.583 BaseBdev2_malloc 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.583 [2024-11-26 17:26:17.642927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:47.583 [2024-11-26 17:26:17.643007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:29:47.583 [2024-11-26 17:26:17.643038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:47.583 [2024-11-26 17:26:17.643053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:47.583 [2024-11-26 17:26:17.645860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:47.583 [2024-11-26 17:26:17.645905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:47.583 BaseBdev2 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.583 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.842 BaseBdev3_malloc 00:29:47.842 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.842 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:47.842 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.842 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.842 [2024-11-26 17:26:17.714404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:47.842 [2024-11-26 17:26:17.714484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:47.842 [2024-11-26 17:26:17.714509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:47.842 [2024-11-26 17:26:17.714537] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:47.842 [2024-11-26 17:26:17.717039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:47.842 [2024-11-26 17:26:17.717085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:47.842 BaseBdev3 00:29:47.842 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.842 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:47.842 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:47.842 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.842 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.842 BaseBdev4_malloc 00:29:47.842 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.842 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:47.842 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.842 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.842 [2024-11-26 17:26:17.777175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:47.843 [2024-11-26 17:26:17.777276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:47.843 [2024-11-26 17:26:17.777300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:29:47.843 [2024-11-26 17:26:17.777315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:47.843 [2024-11-26 17:26:17.779929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:47.843 [2024-11-26 17:26:17.779976] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:47.843 BaseBdev4 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.843 spare_malloc 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.843 spare_delay 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.843 [2024-11-26 17:26:17.854416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:47.843 [2024-11-26 17:26:17.854482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:47.843 [2024-11-26 17:26:17.854504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:29:47.843 [2024-11-26 17:26:17.854530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:47.843 [2024-11-26 
17:26:17.857071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:47.843 [2024-11-26 17:26:17.857118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:47.843 spare 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.843 [2024-11-26 17:26:17.866451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:47.843 [2024-11-26 17:26:17.868789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:47.843 [2024-11-26 17:26:17.868858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:47.843 [2024-11-26 17:26:17.868910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:47.843 [2024-11-26 17:26:17.868989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:47.843 [2024-11-26 17:26:17.869013] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:47.843 [2024-11-26 17:26:17.869305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:47.843 [2024-11-26 17:26:17.869481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:47.843 [2024-11-26 17:26:17.869501] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:47.843 [2024-11-26 17:26:17.869719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:47.843 "name": "raid_bdev1", 00:29:47.843 "uuid": "8cff2a5a-a3ef-4561-8dc9-c64ec96e27b8", 00:29:47.843 "strip_size_kb": 0, 00:29:47.843 "state": "online", 00:29:47.843 "raid_level": 
"raid1", 00:29:47.843 "superblock": false, 00:29:47.843 "num_base_bdevs": 4, 00:29:47.843 "num_base_bdevs_discovered": 4, 00:29:47.843 "num_base_bdevs_operational": 4, 00:29:47.843 "base_bdevs_list": [ 00:29:47.843 { 00:29:47.843 "name": "BaseBdev1", 00:29:47.843 "uuid": "1242b27d-97fb-5a8c-9077-32dfa68d0f2c", 00:29:47.843 "is_configured": true, 00:29:47.843 "data_offset": 0, 00:29:47.843 "data_size": 65536 00:29:47.843 }, 00:29:47.843 { 00:29:47.843 "name": "BaseBdev2", 00:29:47.843 "uuid": "276db7f1-2a18-5eb5-bfbc-6c862eebd997", 00:29:47.843 "is_configured": true, 00:29:47.843 "data_offset": 0, 00:29:47.843 "data_size": 65536 00:29:47.843 }, 00:29:47.843 { 00:29:47.843 "name": "BaseBdev3", 00:29:47.843 "uuid": "6e1b11ac-f929-5f71-bf3f-00f717e728a3", 00:29:47.843 "is_configured": true, 00:29:47.843 "data_offset": 0, 00:29:47.843 "data_size": 65536 00:29:47.843 }, 00:29:47.843 { 00:29:47.843 "name": "BaseBdev4", 00:29:47.843 "uuid": "e5653a00-559c-5846-b7c7-94aa02f2fef4", 00:29:47.843 "is_configured": true, 00:29:47.843 "data_offset": 0, 00:29:47.843 "data_size": 65536 00:29:47.843 } 00:29:47.843 ] 00:29:47.843 }' 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:47.843 17:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.410 [2024-11-26 17:26:18.346212] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.410 17:26:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:48.410 17:26:18 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:48.668 [2024-11-26 17:26:18.653827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:29:48.668 /dev/nbd0 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:48.668 1+0 records in 00:29:48.668 1+0 records out 00:29:48.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478512 s, 8.6 MB/s 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:29:48.668 17:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:29:55.241 65536+0 records in 00:29:55.241 65536+0 records out 00:29:55.241 33554432 bytes (34 MB, 32 MiB) copied, 6.6115 s, 5.1 MB/s 00:29:55.241 17:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:29:55.241 17:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:55.241 17:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:55.241 17:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:55.241 17:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:55.241 17:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:55.241 17:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:55.501 [2024-11-26 17:26:25.552818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:55.501 
17:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.501 [2024-11-26 17:26:25.592892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:55.501 17:26:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.501 17:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:55.760 17:26:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.760 17:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:55.760 "name": "raid_bdev1", 00:29:55.760 "uuid": "8cff2a5a-a3ef-4561-8dc9-c64ec96e27b8", 00:29:55.760 "strip_size_kb": 0, 00:29:55.760 "state": "online", 00:29:55.760 "raid_level": "raid1", 00:29:55.760 "superblock": false, 00:29:55.760 "num_base_bdevs": 4, 00:29:55.760 "num_base_bdevs_discovered": 3, 00:29:55.760 "num_base_bdevs_operational": 3, 00:29:55.760 "base_bdevs_list": [ 00:29:55.760 { 00:29:55.760 "name": null, 00:29:55.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.760 "is_configured": false, 00:29:55.760 "data_offset": 0, 00:29:55.760 "data_size": 65536 00:29:55.760 }, 00:29:55.760 { 00:29:55.760 "name": "BaseBdev2", 00:29:55.760 "uuid": "276db7f1-2a18-5eb5-bfbc-6c862eebd997", 00:29:55.760 "is_configured": true, 00:29:55.760 "data_offset": 0, 00:29:55.760 "data_size": 65536 00:29:55.760 }, 00:29:55.760 { 00:29:55.760 "name": "BaseBdev3", 00:29:55.760 "uuid": "6e1b11ac-f929-5f71-bf3f-00f717e728a3", 00:29:55.760 "is_configured": true, 00:29:55.760 "data_offset": 0, 00:29:55.760 "data_size": 65536 00:29:55.760 }, 00:29:55.760 { 00:29:55.760 "name": "BaseBdev4", 00:29:55.760 "uuid": "e5653a00-559c-5846-b7c7-94aa02f2fef4", 00:29:55.760 
"is_configured": true, 00:29:55.760 "data_offset": 0, 00:29:55.760 "data_size": 65536 00:29:55.760 } 00:29:55.760 ] 00:29:55.760 }' 00:29:55.760 17:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:55.760 17:26:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.019 17:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:56.019 17:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.019 17:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.019 [2024-11-26 17:26:26.012304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:56.019 [2024-11-26 17:26:26.028900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:29:56.019 17:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.019 17:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:29:56.019 [2024-11-26 17:26:26.031378] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:56.957 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:56.957 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:56.957 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:56.957 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:56.957 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:56.957 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:56.957 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.957 
17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.957 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:57.216 "name": "raid_bdev1", 00:29:57.216 "uuid": "8cff2a5a-a3ef-4561-8dc9-c64ec96e27b8", 00:29:57.216 "strip_size_kb": 0, 00:29:57.216 "state": "online", 00:29:57.216 "raid_level": "raid1", 00:29:57.216 "superblock": false, 00:29:57.216 "num_base_bdevs": 4, 00:29:57.216 "num_base_bdevs_discovered": 4, 00:29:57.216 "num_base_bdevs_operational": 4, 00:29:57.216 "process": { 00:29:57.216 "type": "rebuild", 00:29:57.216 "target": "spare", 00:29:57.216 "progress": { 00:29:57.216 "blocks": 20480, 00:29:57.216 "percent": 31 00:29:57.216 } 00:29:57.216 }, 00:29:57.216 "base_bdevs_list": [ 00:29:57.216 { 00:29:57.216 "name": "spare", 00:29:57.216 "uuid": "0aaabc18-b069-55b3-8212-f54c51ddda64", 00:29:57.216 "is_configured": true, 00:29:57.216 "data_offset": 0, 00:29:57.216 "data_size": 65536 00:29:57.216 }, 00:29:57.216 { 00:29:57.216 "name": "BaseBdev2", 00:29:57.216 "uuid": "276db7f1-2a18-5eb5-bfbc-6c862eebd997", 00:29:57.216 "is_configured": true, 00:29:57.216 "data_offset": 0, 00:29:57.216 "data_size": 65536 00:29:57.216 }, 00:29:57.216 { 00:29:57.216 "name": "BaseBdev3", 00:29:57.216 "uuid": "6e1b11ac-f929-5f71-bf3f-00f717e728a3", 00:29:57.216 "is_configured": true, 00:29:57.216 "data_offset": 0, 00:29:57.216 "data_size": 65536 00:29:57.216 }, 00:29:57.216 { 00:29:57.216 "name": "BaseBdev4", 00:29:57.216 "uuid": "e5653a00-559c-5846-b7c7-94aa02f2fef4", 00:29:57.216 "is_configured": true, 00:29:57.216 "data_offset": 0, 00:29:57.216 "data_size": 65536 00:29:57.216 } 00:29:57.216 ] 00:29:57.216 }' 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.216 [2024-11-26 17:26:27.187015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:57.216 [2024-11-26 17:26:27.239802] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:57.216 [2024-11-26 17:26:27.240171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:57.216 [2024-11-26 17:26:27.240201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:57.216 [2024-11-26 17:26:27.240219] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:57.216 17:26:27 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:57.216 "name": "raid_bdev1", 00:29:57.216 "uuid": "8cff2a5a-a3ef-4561-8dc9-c64ec96e27b8", 00:29:57.216 "strip_size_kb": 0, 00:29:57.216 "state": "online", 00:29:57.216 "raid_level": "raid1", 00:29:57.216 "superblock": false, 00:29:57.216 "num_base_bdevs": 4, 00:29:57.216 "num_base_bdevs_discovered": 3, 00:29:57.216 "num_base_bdevs_operational": 3, 00:29:57.216 "base_bdevs_list": [ 00:29:57.216 { 00:29:57.216 "name": null, 00:29:57.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.216 "is_configured": false, 00:29:57.216 "data_offset": 0, 00:29:57.216 "data_size": 65536 00:29:57.216 }, 00:29:57.216 { 00:29:57.216 "name": "BaseBdev2", 00:29:57.216 "uuid": "276db7f1-2a18-5eb5-bfbc-6c862eebd997", 00:29:57.216 "is_configured": true, 00:29:57.216 "data_offset": 0, 00:29:57.216 "data_size": 65536 00:29:57.216 }, 00:29:57.216 { 00:29:57.216 "name": 
"BaseBdev3", 00:29:57.216 "uuid": "6e1b11ac-f929-5f71-bf3f-00f717e728a3", 00:29:57.216 "is_configured": true, 00:29:57.216 "data_offset": 0, 00:29:57.216 "data_size": 65536 00:29:57.216 }, 00:29:57.216 { 00:29:57.216 "name": "BaseBdev4", 00:29:57.216 "uuid": "e5653a00-559c-5846-b7c7-94aa02f2fef4", 00:29:57.216 "is_configured": true, 00:29:57.216 "data_offset": 0, 00:29:57.216 "data_size": 65536 00:29:57.216 } 00:29:57.216 ] 00:29:57.216 }' 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:57.216 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:57.785 "name": "raid_bdev1", 00:29:57.785 "uuid": "8cff2a5a-a3ef-4561-8dc9-c64ec96e27b8", 00:29:57.785 "strip_size_kb": 0, 00:29:57.785 "state": "online", 00:29:57.785 "raid_level": 
"raid1", 00:29:57.785 "superblock": false, 00:29:57.785 "num_base_bdevs": 4, 00:29:57.785 "num_base_bdevs_discovered": 3, 00:29:57.785 "num_base_bdevs_operational": 3, 00:29:57.785 "base_bdevs_list": [ 00:29:57.785 { 00:29:57.785 "name": null, 00:29:57.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.785 "is_configured": false, 00:29:57.785 "data_offset": 0, 00:29:57.785 "data_size": 65536 00:29:57.785 }, 00:29:57.785 { 00:29:57.785 "name": "BaseBdev2", 00:29:57.785 "uuid": "276db7f1-2a18-5eb5-bfbc-6c862eebd997", 00:29:57.785 "is_configured": true, 00:29:57.785 "data_offset": 0, 00:29:57.785 "data_size": 65536 00:29:57.785 }, 00:29:57.785 { 00:29:57.785 "name": "BaseBdev3", 00:29:57.785 "uuid": "6e1b11ac-f929-5f71-bf3f-00f717e728a3", 00:29:57.785 "is_configured": true, 00:29:57.785 "data_offset": 0, 00:29:57.785 "data_size": 65536 00:29:57.785 }, 00:29:57.785 { 00:29:57.785 "name": "BaseBdev4", 00:29:57.785 "uuid": "e5653a00-559c-5846-b7c7-94aa02f2fef4", 00:29:57.785 "is_configured": true, 00:29:57.785 "data_offset": 0, 00:29:57.785 "data_size": 65536 00:29:57.785 } 00:29:57.785 ] 00:29:57.785 }' 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.785 [2024-11-26 17:26:27.815086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:29:57.785 [2024-11-26 17:26:27.830615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.785 17:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:29:57.785 [2024-11-26 17:26:27.833043] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:59.164 "name": "raid_bdev1", 00:29:59.164 "uuid": "8cff2a5a-a3ef-4561-8dc9-c64ec96e27b8", 00:29:59.164 "strip_size_kb": 0, 00:29:59.164 "state": "online", 00:29:59.164 "raid_level": "raid1", 00:29:59.164 "superblock": false, 00:29:59.164 "num_base_bdevs": 4, 00:29:59.164 "num_base_bdevs_discovered": 4, 00:29:59.164 "num_base_bdevs_operational": 4, 
00:29:59.164 "process": { 00:29:59.164 "type": "rebuild", 00:29:59.164 "target": "spare", 00:29:59.164 "progress": { 00:29:59.164 "blocks": 20480, 00:29:59.164 "percent": 31 00:29:59.164 } 00:29:59.164 }, 00:29:59.164 "base_bdevs_list": [ 00:29:59.164 { 00:29:59.164 "name": "spare", 00:29:59.164 "uuid": "0aaabc18-b069-55b3-8212-f54c51ddda64", 00:29:59.164 "is_configured": true, 00:29:59.164 "data_offset": 0, 00:29:59.164 "data_size": 65536 00:29:59.164 }, 00:29:59.164 { 00:29:59.164 "name": "BaseBdev2", 00:29:59.164 "uuid": "276db7f1-2a18-5eb5-bfbc-6c862eebd997", 00:29:59.164 "is_configured": true, 00:29:59.164 "data_offset": 0, 00:29:59.164 "data_size": 65536 00:29:59.164 }, 00:29:59.164 { 00:29:59.164 "name": "BaseBdev3", 00:29:59.164 "uuid": "6e1b11ac-f929-5f71-bf3f-00f717e728a3", 00:29:59.164 "is_configured": true, 00:29:59.164 "data_offset": 0, 00:29:59.164 "data_size": 65536 00:29:59.164 }, 00:29:59.164 { 00:29:59.164 "name": "BaseBdev4", 00:29:59.164 "uuid": "e5653a00-559c-5846-b7c7-94aa02f2fef4", 00:29:59.164 "is_configured": true, 00:29:59.164 "data_offset": 0, 00:29:59.164 "data_size": 65536 00:29:59.164 } 00:29:59.164 ] 00:29:59.164 }' 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.164 17:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.164 [2024-11-26 17:26:28.992520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:59.165 [2024-11-26 17:26:29.042028] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:59.165 "name": "raid_bdev1", 00:29:59.165 "uuid": "8cff2a5a-a3ef-4561-8dc9-c64ec96e27b8", 00:29:59.165 "strip_size_kb": 0, 00:29:59.165 "state": "online", 00:29:59.165 "raid_level": "raid1", 00:29:59.165 "superblock": false, 00:29:59.165 "num_base_bdevs": 4, 00:29:59.165 "num_base_bdevs_discovered": 3, 00:29:59.165 "num_base_bdevs_operational": 3, 00:29:59.165 "process": { 00:29:59.165 "type": "rebuild", 00:29:59.165 "target": "spare", 00:29:59.165 "progress": { 00:29:59.165 "blocks": 24576, 00:29:59.165 "percent": 37 00:29:59.165 } 00:29:59.165 }, 00:29:59.165 "base_bdevs_list": [ 00:29:59.165 { 00:29:59.165 "name": "spare", 00:29:59.165 "uuid": "0aaabc18-b069-55b3-8212-f54c51ddda64", 00:29:59.165 "is_configured": true, 00:29:59.165 "data_offset": 0, 00:29:59.165 "data_size": 65536 00:29:59.165 }, 00:29:59.165 { 00:29:59.165 "name": null, 00:29:59.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.165 "is_configured": false, 00:29:59.165 "data_offset": 0, 00:29:59.165 "data_size": 65536 00:29:59.165 }, 00:29:59.165 { 00:29:59.165 "name": "BaseBdev3", 00:29:59.165 "uuid": "6e1b11ac-f929-5f71-bf3f-00f717e728a3", 00:29:59.165 "is_configured": true, 00:29:59.165 "data_offset": 0, 00:29:59.165 "data_size": 65536 00:29:59.165 }, 00:29:59.165 { 00:29:59.165 "name": "BaseBdev4", 00:29:59.165 "uuid": "e5653a00-559c-5846-b7c7-94aa02f2fef4", 00:29:59.165 "is_configured": true, 00:29:59.165 "data_offset": 0, 00:29:59.165 "data_size": 65536 00:29:59.165 } 00:29:59.165 ] 00:29:59.165 }' 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:59.165 17:26:29 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=455 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:59.165 "name": "raid_bdev1", 00:29:59.165 "uuid": "8cff2a5a-a3ef-4561-8dc9-c64ec96e27b8", 00:29:59.165 "strip_size_kb": 0, 00:29:59.165 "state": "online", 00:29:59.165 "raid_level": "raid1", 00:29:59.165 "superblock": false, 00:29:59.165 "num_base_bdevs": 4, 00:29:59.165 "num_base_bdevs_discovered": 3, 00:29:59.165 "num_base_bdevs_operational": 3, 00:29:59.165 "process": { 00:29:59.165 "type": "rebuild", 00:29:59.165 "target": "spare", 00:29:59.165 "progress": { 00:29:59.165 "blocks": 26624, 00:29:59.165 "percent": 40 
00:29:59.165 } 00:29:59.165 }, 00:29:59.165 "base_bdevs_list": [ 00:29:59.165 { 00:29:59.165 "name": "spare", 00:29:59.165 "uuid": "0aaabc18-b069-55b3-8212-f54c51ddda64", 00:29:59.165 "is_configured": true, 00:29:59.165 "data_offset": 0, 00:29:59.165 "data_size": 65536 00:29:59.165 }, 00:29:59.165 { 00:29:59.165 "name": null, 00:29:59.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.165 "is_configured": false, 00:29:59.165 "data_offset": 0, 00:29:59.165 "data_size": 65536 00:29:59.165 }, 00:29:59.165 { 00:29:59.165 "name": "BaseBdev3", 00:29:59.165 "uuid": "6e1b11ac-f929-5f71-bf3f-00f717e728a3", 00:29:59.165 "is_configured": true, 00:29:59.165 "data_offset": 0, 00:29:59.165 "data_size": 65536 00:29:59.165 }, 00:29:59.165 { 00:29:59.165 "name": "BaseBdev4", 00:29:59.165 "uuid": "e5653a00-559c-5846-b7c7-94aa02f2fef4", 00:29:59.165 "is_configured": true, 00:29:59.165 "data_offset": 0, 00:29:59.165 "data_size": 65536 00:29:59.165 } 00:29:59.165 ] 00:29:59.165 }' 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:59.165 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:59.423 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:59.423 17:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:00.360 17:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:00.360 17:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:00.360 17:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:00.360 17:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:00.360 17:26:30 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:00.360 17:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:00.360 17:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:00.360 17:26:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.360 17:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:00.360 17:26:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:00.360 17:26:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.360 17:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:00.360 "name": "raid_bdev1", 00:30:00.360 "uuid": "8cff2a5a-a3ef-4561-8dc9-c64ec96e27b8", 00:30:00.360 "strip_size_kb": 0, 00:30:00.360 "state": "online", 00:30:00.360 "raid_level": "raid1", 00:30:00.360 "superblock": false, 00:30:00.360 "num_base_bdevs": 4, 00:30:00.360 "num_base_bdevs_discovered": 3, 00:30:00.360 "num_base_bdevs_operational": 3, 00:30:00.360 "process": { 00:30:00.360 "type": "rebuild", 00:30:00.360 "target": "spare", 00:30:00.360 "progress": { 00:30:00.360 "blocks": 49152, 00:30:00.360 "percent": 75 00:30:00.360 } 00:30:00.360 }, 00:30:00.360 "base_bdevs_list": [ 00:30:00.360 { 00:30:00.361 "name": "spare", 00:30:00.361 "uuid": "0aaabc18-b069-55b3-8212-f54c51ddda64", 00:30:00.361 "is_configured": true, 00:30:00.361 "data_offset": 0, 00:30:00.361 "data_size": 65536 00:30:00.361 }, 00:30:00.361 { 00:30:00.361 "name": null, 00:30:00.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.361 "is_configured": false, 00:30:00.361 "data_offset": 0, 00:30:00.361 "data_size": 65536 00:30:00.361 }, 00:30:00.361 { 00:30:00.361 "name": "BaseBdev3", 00:30:00.361 "uuid": "6e1b11ac-f929-5f71-bf3f-00f717e728a3", 00:30:00.361 "is_configured": true, 
00:30:00.361 "data_offset": 0, 00:30:00.361 "data_size": 65536 00:30:00.361 }, 00:30:00.361 { 00:30:00.361 "name": "BaseBdev4", 00:30:00.361 "uuid": "e5653a00-559c-5846-b7c7-94aa02f2fef4", 00:30:00.361 "is_configured": true, 00:30:00.361 "data_offset": 0, 00:30:00.361 "data_size": 65536 00:30:00.361 } 00:30:00.361 ] 00:30:00.361 }' 00:30:00.361 17:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:00.361 17:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:00.361 17:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:00.361 17:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:00.361 17:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:01.298 [2024-11-26 17:26:31.056319] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:01.298 [2024-11-26 17:26:31.056409] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:01.298 [2024-11-26 17:26:31.056476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:01.557 "name": "raid_bdev1", 00:30:01.557 "uuid": "8cff2a5a-a3ef-4561-8dc9-c64ec96e27b8", 00:30:01.557 "strip_size_kb": 0, 00:30:01.557 "state": "online", 00:30:01.557 "raid_level": "raid1", 00:30:01.557 "superblock": false, 00:30:01.557 "num_base_bdevs": 4, 00:30:01.557 "num_base_bdevs_discovered": 3, 00:30:01.557 "num_base_bdevs_operational": 3, 00:30:01.557 "base_bdevs_list": [ 00:30:01.557 { 00:30:01.557 "name": "spare", 00:30:01.557 "uuid": "0aaabc18-b069-55b3-8212-f54c51ddda64", 00:30:01.557 "is_configured": true, 00:30:01.557 "data_offset": 0, 00:30:01.557 "data_size": 65536 00:30:01.557 }, 00:30:01.557 { 00:30:01.557 "name": null, 00:30:01.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.557 "is_configured": false, 00:30:01.557 "data_offset": 0, 00:30:01.557 "data_size": 65536 00:30:01.557 }, 00:30:01.557 { 00:30:01.557 "name": "BaseBdev3", 00:30:01.557 "uuid": "6e1b11ac-f929-5f71-bf3f-00f717e728a3", 00:30:01.557 "is_configured": true, 00:30:01.557 "data_offset": 0, 00:30:01.557 "data_size": 65536 00:30:01.557 }, 00:30:01.557 { 00:30:01.557 "name": "BaseBdev4", 00:30:01.557 "uuid": "e5653a00-559c-5846-b7c7-94aa02f2fef4", 00:30:01.557 "is_configured": true, 00:30:01.557 "data_offset": 0, 00:30:01.557 "data_size": 65536 00:30:01.557 } 00:30:01.557 ] 00:30:01.557 }' 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:01.557 17:26:31 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:01.557 "name": "raid_bdev1", 00:30:01.557 "uuid": "8cff2a5a-a3ef-4561-8dc9-c64ec96e27b8", 00:30:01.557 "strip_size_kb": 0, 00:30:01.557 "state": "online", 00:30:01.557 "raid_level": "raid1", 00:30:01.557 "superblock": false, 00:30:01.557 "num_base_bdevs": 4, 00:30:01.557 "num_base_bdevs_discovered": 3, 00:30:01.557 "num_base_bdevs_operational": 3, 00:30:01.557 "base_bdevs_list": [ 00:30:01.557 { 00:30:01.557 "name": "spare", 
00:30:01.557 "uuid": "0aaabc18-b069-55b3-8212-f54c51ddda64", 00:30:01.557 "is_configured": true, 00:30:01.557 "data_offset": 0, 00:30:01.557 "data_size": 65536 00:30:01.557 }, 00:30:01.557 { 00:30:01.557 "name": null, 00:30:01.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.557 "is_configured": false, 00:30:01.557 "data_offset": 0, 00:30:01.557 "data_size": 65536 00:30:01.557 }, 00:30:01.557 { 00:30:01.557 "name": "BaseBdev3", 00:30:01.557 "uuid": "6e1b11ac-f929-5f71-bf3f-00f717e728a3", 00:30:01.557 "is_configured": true, 00:30:01.557 "data_offset": 0, 00:30:01.557 "data_size": 65536 00:30:01.557 }, 00:30:01.557 { 00:30:01.557 "name": "BaseBdev4", 00:30:01.557 "uuid": "e5653a00-559c-5846-b7c7-94aa02f2fef4", 00:30:01.557 "is_configured": true, 00:30:01.557 "data_offset": 0, 00:30:01.557 "data_size": 65536 00:30:01.557 } 00:30:01.557 ] 00:30:01.557 }' 00:30:01.557 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:01.816 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:01.816 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:01.816 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:01.816 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:01.816 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:01.816 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:01.816 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:01.816 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:01.817 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:01.817 17:26:31 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:01.817 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:01.817 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:01.817 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:01.817 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:01.817 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:01.817 17:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.817 17:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.817 17:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.817 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:01.817 "name": "raid_bdev1", 00:30:01.817 "uuid": "8cff2a5a-a3ef-4561-8dc9-c64ec96e27b8", 00:30:01.817 "strip_size_kb": 0, 00:30:01.817 "state": "online", 00:30:01.817 "raid_level": "raid1", 00:30:01.817 "superblock": false, 00:30:01.817 "num_base_bdevs": 4, 00:30:01.817 "num_base_bdevs_discovered": 3, 00:30:01.817 "num_base_bdevs_operational": 3, 00:30:01.817 "base_bdevs_list": [ 00:30:01.817 { 00:30:01.817 "name": "spare", 00:30:01.817 "uuid": "0aaabc18-b069-55b3-8212-f54c51ddda64", 00:30:01.817 "is_configured": true, 00:30:01.817 "data_offset": 0, 00:30:01.817 "data_size": 65536 00:30:01.817 }, 00:30:01.817 { 00:30:01.817 "name": null, 00:30:01.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.817 "is_configured": false, 00:30:01.817 "data_offset": 0, 00:30:01.817 "data_size": 65536 00:30:01.817 }, 00:30:01.817 { 00:30:01.817 "name": "BaseBdev3", 00:30:01.817 "uuid": "6e1b11ac-f929-5f71-bf3f-00f717e728a3", 00:30:01.817 "is_configured": true, 
00:30:01.817 "data_offset": 0, 00:30:01.817 "data_size": 65536 00:30:01.817 }, 00:30:01.817 { 00:30:01.817 "name": "BaseBdev4", 00:30:01.817 "uuid": "e5653a00-559c-5846-b7c7-94aa02f2fef4", 00:30:01.817 "is_configured": true, 00:30:01.817 "data_offset": 0, 00:30:01.817 "data_size": 65536 00:30:01.817 } 00:30:01.817 ] 00:30:01.817 }' 00:30:01.817 17:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:01.817 17:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:02.413 [2024-11-26 17:26:32.191935] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:02.413 [2024-11-26 17:26:32.191979] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:02.413 [2024-11-26 17:26:32.192084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:02.413 [2024-11-26 17:26:32.192180] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:02.413 [2024-11-26 17:26:32.192193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:02.413 /dev/nbd0 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:02.413 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:02.414 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:30:02.414 17:26:32 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:02.414 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:02.414 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:02.414 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:30:02.414 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:02.414 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:02.414 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:02.414 1+0 records in 00:30:02.414 1+0 records out 00:30:02.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474431 s, 8.6 MB/s 00:30:02.414 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:02.414 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:30:02.414 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:02.690 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:02.690 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:30:02.690 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:02.690 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:02.690 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:30:02.690 /dev/nbd1 00:30:02.690 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:02.948 
17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:02.948 1+0 records in 00:30:02.948 1+0 records out 00:30:02.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551823 s, 7.4 MB/s 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:30:02.948 17:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:02.948 17:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:30:02.948 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:02.948 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:02.948 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:02.948 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:30:02.948 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:02.948 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:03.206 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:03.206 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:03.206 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:03.206 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:03.206 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:03.206 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:03.206 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:03.206 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:03.206 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:03.206 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:03.464 
17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77673 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77673 ']' 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77673 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77673 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:03.464 killing process with pid 77673 00:30:03.464 Received shutdown signal, test time was about 60.000000 seconds 00:30:03.464 00:30:03.464 Latency(us) 00:30:03.464 [2024-11-26T17:26:33.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.464 [2024-11-26T17:26:33.578Z] 
=================================================================================================================== 00:30:03.464 [2024-11-26T17:26:33.578Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77673' 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77673 00:30:03.464 [2024-11-26 17:26:33.555065] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:03.464 17:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77673 00:30:04.029 [2024-11-26 17:26:34.093126] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:05.401 17:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:30:05.401 00:30:05.401 real 0m18.765s 00:30:05.401 user 0m20.249s 00:30:05.401 sys 0m4.046s 00:30:05.401 17:26:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:05.401 ************************************ 00:30:05.401 END TEST raid_rebuild_test 00:30:05.401 ************************************ 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.402 17:26:35 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:30:05.402 17:26:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:30:05.402 17:26:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:05.402 17:26:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:05.402 ************************************ 00:30:05.402 START TEST raid_rebuild_test_sb 00:30:05.402 ************************************ 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78136 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78136 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78136 ']' 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.402 17:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.660 [2024-11-26 17:26:35.556293] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:30:05.660 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:05.660 Zero copy mechanism will not be used. 00:30:05.660 [2024-11-26 17:26:35.557343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78136 ] 00:30:05.660 [2024-11-26 17:26:35.760109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.918 [2024-11-26 17:26:35.909329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.176 [2024-11-26 17:26:36.148383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:06.176 [2024-11-26 17:26:36.148651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:06.434 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.434 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:30:06.434 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:06.434 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:06.434 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:06.434 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.434 BaseBdev1_malloc 00:30:06.434 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.434 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:06.434 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.434 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.434 [2024-11-26 17:26:36.511358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:06.434 [2024-11-26 17:26:36.511615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:06.434 [2024-11-26 17:26:36.511689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:06.434 [2024-11-26 17:26:36.511795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:06.434 [2024-11-26 17:26:36.514760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:06.434 [2024-11-26 17:26:36.514810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:06.434 BaseBdev1 00:30:06.434 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.434 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:06.434 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:06.434 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.434 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.692 BaseBdev2_malloc 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.692 [2024-11-26 17:26:36.572480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:06.692 [2024-11-26 17:26:36.572753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:06.692 [2024-11-26 17:26:36.572812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:06.692 [2024-11-26 17:26:36.572831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:06.692 [2024-11-26 17:26:36.575840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:06.692 BaseBdev2 00:30:06.692 [2024-11-26 17:26:36.576005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.692 BaseBdev3_malloc 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.692 [2024-11-26 17:26:36.648623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:06.692 [2024-11-26 17:26:36.648853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:06.692 [2024-11-26 17:26:36.648930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:06.692 [2024-11-26 17:26:36.649031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:06.692 [2024-11-26 17:26:36.652233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:06.692 [2024-11-26 17:26:36.652399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:06.692 BaseBdev3 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.692 BaseBdev4_malloc 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.692 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:30:06.692 [2024-11-26 17:26:36.712928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:06.692 [2024-11-26 17:26:36.713141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:06.692 [2024-11-26 17:26:36.713213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:30:06.692 [2024-11-26 17:26:36.713305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:06.692 [2024-11-26 17:26:36.716313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:06.693 [2024-11-26 17:26:36.716366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:06.693 BaseBdev4 00:30:06.693 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.693 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:30:06.693 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.693 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.693 spare_malloc 00:30:06.693 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.693 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:06.693 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.693 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.693 spare_delay 00:30:06.693 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.693 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:06.693 17:26:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.693 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.693 [2024-11-26 17:26:36.790036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:06.693 [2024-11-26 17:26:36.790240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:06.693 [2024-11-26 17:26:36.790321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:30:06.693 [2024-11-26 17:26:36.790415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:06.693 [2024-11-26 17:26:36.793456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:06.693 [2024-11-26 17:26:36.793688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:06.693 spare 00:30:06.693 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.693 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:30:06.693 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.693 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.693 [2024-11-26 17:26:36.802146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:06.693 [2024-11-26 17:26:36.804635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:07.033 [2024-11-26 17:26:36.804826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:07.033 [2024-11-26 17:26:36.804933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:07.033 [2024-11-26 17:26:36.805264] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:07.033 [2024-11-26 17:26:36.805387] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:07.033 [2024-11-26 17:26:36.805749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:30:07.033 [2024-11-26 17:26:36.806069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:07.033 [2024-11-26 17:26:36.806163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:07.033 [2024-11-26 17:26:36.806541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:07.033 "name": "raid_bdev1", 00:30:07.033 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:07.033 "strip_size_kb": 0, 00:30:07.033 "state": "online", 00:30:07.033 "raid_level": "raid1", 00:30:07.033 "superblock": true, 00:30:07.033 "num_base_bdevs": 4, 00:30:07.033 "num_base_bdevs_discovered": 4, 00:30:07.033 "num_base_bdevs_operational": 4, 00:30:07.033 "base_bdevs_list": [ 00:30:07.033 { 00:30:07.033 "name": "BaseBdev1", 00:30:07.033 "uuid": "4213af2b-4358-599e-8814-5bb60c25ad5e", 00:30:07.033 "is_configured": true, 00:30:07.033 "data_offset": 2048, 00:30:07.033 "data_size": 63488 00:30:07.033 }, 00:30:07.033 { 00:30:07.033 "name": "BaseBdev2", 00:30:07.033 "uuid": "57e38ca2-866c-55f1-b40c-698317faa3dc", 00:30:07.033 "is_configured": true, 00:30:07.033 "data_offset": 2048, 00:30:07.033 "data_size": 63488 00:30:07.033 }, 00:30:07.033 { 00:30:07.033 "name": "BaseBdev3", 00:30:07.033 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:07.033 "is_configured": true, 00:30:07.033 "data_offset": 2048, 00:30:07.033 "data_size": 63488 00:30:07.033 }, 00:30:07.033 { 00:30:07.033 "name": "BaseBdev4", 00:30:07.033 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:07.033 "is_configured": true, 00:30:07.033 "data_offset": 2048, 00:30:07.033 "data_size": 63488 00:30:07.033 } 00:30:07.033 ] 00:30:07.033 }' 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:07.033 17:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:30:07.305 [2024-11-26 17:26:37.282265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:07.305 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:30:07.564 [2024-11-26 17:26:37.577860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:30:07.564 /dev/nbd0 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:30:07.564 
17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:07.564 1+0 records in 00:30:07.564 1+0 records out 00:30:07.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295298 s, 13.9 MB/s 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:30:07.564 17:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:30:14.126 63488+0 records in 00:30:14.126 63488+0 records out 00:30:14.126 32505856 bytes (33 MB, 31 MiB) copied, 6.44139 s, 5.0 MB/s 00:30:14.126 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:30:14.126 17:26:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:14.126 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:14.126 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:14.126 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:14.126 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:14.126 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:14.415 [2024-11-26 17:26:44.308914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.415 [2024-11-26 17:26:44.344977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:14.415 
17:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:14.415 "name": "raid_bdev1", 00:30:14.415 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:14.415 "strip_size_kb": 0, 00:30:14.415 "state": 
"online", 00:30:14.415 "raid_level": "raid1", 00:30:14.415 "superblock": true, 00:30:14.415 "num_base_bdevs": 4, 00:30:14.415 "num_base_bdevs_discovered": 3, 00:30:14.415 "num_base_bdevs_operational": 3, 00:30:14.415 "base_bdevs_list": [ 00:30:14.415 { 00:30:14.415 "name": null, 00:30:14.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:14.415 "is_configured": false, 00:30:14.415 "data_offset": 0, 00:30:14.415 "data_size": 63488 00:30:14.415 }, 00:30:14.415 { 00:30:14.415 "name": "BaseBdev2", 00:30:14.415 "uuid": "57e38ca2-866c-55f1-b40c-698317faa3dc", 00:30:14.415 "is_configured": true, 00:30:14.415 "data_offset": 2048, 00:30:14.415 "data_size": 63488 00:30:14.415 }, 00:30:14.415 { 00:30:14.415 "name": "BaseBdev3", 00:30:14.415 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:14.415 "is_configured": true, 00:30:14.415 "data_offset": 2048, 00:30:14.415 "data_size": 63488 00:30:14.415 }, 00:30:14.415 { 00:30:14.415 "name": "BaseBdev4", 00:30:14.415 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:14.415 "is_configured": true, 00:30:14.415 "data_offset": 2048, 00:30:14.415 "data_size": 63488 00:30:14.415 } 00:30:14.415 ] 00:30:14.415 }' 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:14.415 17:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.983 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:14.983 17:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.983 17:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.983 [2024-11-26 17:26:44.812334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:14.983 [2024-11-26 17:26:44.829972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:30:14.983 17:26:44 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.983 17:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:30:14.983 [2024-11-26 17:26:44.832459] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:15.920 "name": "raid_bdev1", 00:30:15.920 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:15.920 "strip_size_kb": 0, 00:30:15.920 "state": "online", 00:30:15.920 "raid_level": "raid1", 00:30:15.920 "superblock": true, 00:30:15.920 "num_base_bdevs": 4, 00:30:15.920 "num_base_bdevs_discovered": 4, 00:30:15.920 "num_base_bdevs_operational": 4, 00:30:15.920 "process": { 00:30:15.920 "type": "rebuild", 00:30:15.920 "target": "spare", 00:30:15.920 "progress": { 00:30:15.920 "blocks": 20480, 
00:30:15.920 "percent": 32 00:30:15.920 } 00:30:15.920 }, 00:30:15.920 "base_bdevs_list": [ 00:30:15.920 { 00:30:15.920 "name": "spare", 00:30:15.920 "uuid": "364d1453-d82e-5a0c-848b-3ff45bd63be7", 00:30:15.920 "is_configured": true, 00:30:15.920 "data_offset": 2048, 00:30:15.920 "data_size": 63488 00:30:15.920 }, 00:30:15.920 { 00:30:15.920 "name": "BaseBdev2", 00:30:15.920 "uuid": "57e38ca2-866c-55f1-b40c-698317faa3dc", 00:30:15.920 "is_configured": true, 00:30:15.920 "data_offset": 2048, 00:30:15.920 "data_size": 63488 00:30:15.920 }, 00:30:15.920 { 00:30:15.920 "name": "BaseBdev3", 00:30:15.920 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:15.920 "is_configured": true, 00:30:15.920 "data_offset": 2048, 00:30:15.920 "data_size": 63488 00:30:15.920 }, 00:30:15.920 { 00:30:15.920 "name": "BaseBdev4", 00:30:15.920 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:15.920 "is_configured": true, 00:30:15.920 "data_offset": 2048, 00:30:15.920 "data_size": 63488 00:30:15.920 } 00:30:15.920 ] 00:30:15.920 }' 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.920 17:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.920 [2024-11-26 17:26:45.992093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:16.179 [2024-11-26 17:26:46.040705] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:16.179 [2024-11-26 17:26:46.040978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:16.179 [2024-11-26 17:26:46.041005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:16.179 [2024-11-26 17:26:46.041021] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:16.179 "name": "raid_bdev1", 00:30:16.179 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:16.179 "strip_size_kb": 0, 00:30:16.179 "state": "online", 00:30:16.179 "raid_level": "raid1", 00:30:16.179 "superblock": true, 00:30:16.179 "num_base_bdevs": 4, 00:30:16.179 "num_base_bdevs_discovered": 3, 00:30:16.179 "num_base_bdevs_operational": 3, 00:30:16.179 "base_bdevs_list": [ 00:30:16.179 { 00:30:16.179 "name": null, 00:30:16.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:16.179 "is_configured": false, 00:30:16.179 "data_offset": 0, 00:30:16.179 "data_size": 63488 00:30:16.179 }, 00:30:16.179 { 00:30:16.179 "name": "BaseBdev2", 00:30:16.179 "uuid": "57e38ca2-866c-55f1-b40c-698317faa3dc", 00:30:16.179 "is_configured": true, 00:30:16.179 "data_offset": 2048, 00:30:16.179 "data_size": 63488 00:30:16.179 }, 00:30:16.179 { 00:30:16.179 "name": "BaseBdev3", 00:30:16.179 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:16.179 "is_configured": true, 00:30:16.179 "data_offset": 2048, 00:30:16.179 "data_size": 63488 00:30:16.179 }, 00:30:16.179 { 00:30:16.179 "name": "BaseBdev4", 00:30:16.179 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:16.179 "is_configured": true, 00:30:16.179 "data_offset": 2048, 00:30:16.179 "data_size": 63488 00:30:16.179 } 00:30:16.179 ] 00:30:16.179 }' 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:16.179 17:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.437 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:30:16.437 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:16.437 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:16.437 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:16.437 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:16.437 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:16.437 17:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.437 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:16.437 17:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.698 17:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.698 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:16.698 "name": "raid_bdev1", 00:30:16.698 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:16.698 "strip_size_kb": 0, 00:30:16.698 "state": "online", 00:30:16.698 "raid_level": "raid1", 00:30:16.698 "superblock": true, 00:30:16.698 "num_base_bdevs": 4, 00:30:16.698 "num_base_bdevs_discovered": 3, 00:30:16.698 "num_base_bdevs_operational": 3, 00:30:16.698 "base_bdevs_list": [ 00:30:16.698 { 00:30:16.698 "name": null, 00:30:16.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:16.698 "is_configured": false, 00:30:16.698 "data_offset": 0, 00:30:16.698 "data_size": 63488 00:30:16.698 }, 00:30:16.698 { 00:30:16.698 "name": "BaseBdev2", 00:30:16.699 "uuid": "57e38ca2-866c-55f1-b40c-698317faa3dc", 00:30:16.699 "is_configured": true, 00:30:16.699 "data_offset": 2048, 00:30:16.699 "data_size": 63488 00:30:16.699 }, 00:30:16.699 { 00:30:16.699 "name": "BaseBdev3", 00:30:16.699 "uuid": 
"ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:16.699 "is_configured": true, 00:30:16.699 "data_offset": 2048, 00:30:16.699 "data_size": 63488 00:30:16.699 }, 00:30:16.699 { 00:30:16.699 "name": "BaseBdev4", 00:30:16.699 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:16.699 "is_configured": true, 00:30:16.699 "data_offset": 2048, 00:30:16.699 "data_size": 63488 00:30:16.699 } 00:30:16.699 ] 00:30:16.699 }' 00:30:16.699 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:16.699 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:16.699 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:16.699 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:16.699 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:16.699 17:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.699 17:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.699 [2024-11-26 17:26:46.675906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:16.699 [2024-11-26 17:26:46.692345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:30:16.699 17:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.699 17:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:30:16.699 [2024-11-26 17:26:46.694998] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:17.642 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:17.642 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:30:17.642 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:17.642 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:17.642 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:17.642 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:17.642 17:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.642 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:17.642 17:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:17.642 17:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.642 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:17.642 "name": "raid_bdev1", 00:30:17.642 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:17.642 "strip_size_kb": 0, 00:30:17.642 "state": "online", 00:30:17.642 "raid_level": "raid1", 00:30:17.642 "superblock": true, 00:30:17.642 "num_base_bdevs": 4, 00:30:17.642 "num_base_bdevs_discovered": 4, 00:30:17.642 "num_base_bdevs_operational": 4, 00:30:17.642 "process": { 00:30:17.642 "type": "rebuild", 00:30:17.642 "target": "spare", 00:30:17.642 "progress": { 00:30:17.642 "blocks": 20480, 00:30:17.642 "percent": 32 00:30:17.642 } 00:30:17.642 }, 00:30:17.642 "base_bdevs_list": [ 00:30:17.642 { 00:30:17.642 "name": "spare", 00:30:17.642 "uuid": "364d1453-d82e-5a0c-848b-3ff45bd63be7", 00:30:17.642 "is_configured": true, 00:30:17.642 "data_offset": 2048, 00:30:17.642 "data_size": 63488 00:30:17.642 }, 00:30:17.642 { 00:30:17.642 "name": "BaseBdev2", 00:30:17.642 "uuid": "57e38ca2-866c-55f1-b40c-698317faa3dc", 00:30:17.642 "is_configured": true, 00:30:17.642 "data_offset": 2048, 
00:30:17.642 "data_size": 63488 00:30:17.642 }, 00:30:17.642 { 00:30:17.642 "name": "BaseBdev3", 00:30:17.642 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:17.642 "is_configured": true, 00:30:17.642 "data_offset": 2048, 00:30:17.642 "data_size": 63488 00:30:17.642 }, 00:30:17.642 { 00:30:17.642 "name": "BaseBdev4", 00:30:17.642 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:17.642 "is_configured": true, 00:30:17.642 "data_offset": 2048, 00:30:17.642 "data_size": 63488 00:30:17.642 } 00:30:17.642 ] 00:30:17.642 }' 00:30:17.642 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:17.901 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:17.901 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:17.901 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:17.901 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:30:17.901 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:30:17.901 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:30:17.901 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:30:17.901 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:30:17.901 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:30:17.901 17:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:17.901 17:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.901 17:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:17.901 [2024-11-26 17:26:47.851180] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:17.901 [2024-11-26 17:26:48.003105] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:30:17.901 17:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.901 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:30:17.901 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:30:17.901 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:17.901 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:17.901 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:17.901 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:17.901 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:18.161 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:18.161 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.161 17:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.161 17:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:18.161 17:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.161 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:18.161 "name": "raid_bdev1", 00:30:18.161 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:18.161 "strip_size_kb": 0, 00:30:18.161 "state": "online", 00:30:18.161 "raid_level": "raid1", 00:30:18.161 "superblock": true, 00:30:18.161 "num_base_bdevs": 4, 
00:30:18.161 "num_base_bdevs_discovered": 3, 00:30:18.161 "num_base_bdevs_operational": 3, 00:30:18.161 "process": { 00:30:18.161 "type": "rebuild", 00:30:18.161 "target": "spare", 00:30:18.161 "progress": { 00:30:18.161 "blocks": 24576, 00:30:18.161 "percent": 38 00:30:18.161 } 00:30:18.161 }, 00:30:18.161 "base_bdevs_list": [ 00:30:18.161 { 00:30:18.161 "name": "spare", 00:30:18.161 "uuid": "364d1453-d82e-5a0c-848b-3ff45bd63be7", 00:30:18.161 "is_configured": true, 00:30:18.161 "data_offset": 2048, 00:30:18.161 "data_size": 63488 00:30:18.161 }, 00:30:18.161 { 00:30:18.161 "name": null, 00:30:18.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:18.161 "is_configured": false, 00:30:18.161 "data_offset": 0, 00:30:18.161 "data_size": 63488 00:30:18.161 }, 00:30:18.161 { 00:30:18.161 "name": "BaseBdev3", 00:30:18.161 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:18.161 "is_configured": true, 00:30:18.161 "data_offset": 2048, 00:30:18.161 "data_size": 63488 00:30:18.161 }, 00:30:18.161 { 00:30:18.161 "name": "BaseBdev4", 00:30:18.161 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:18.161 "is_configured": true, 00:30:18.161 "data_offset": 2048, 00:30:18.161 "data_size": 63488 00:30:18.161 } 00:30:18.161 ] 00:30:18.161 }' 00:30:18.161 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:18.161 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:18.161 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=474 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:18.162 "name": "raid_bdev1", 00:30:18.162 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:18.162 "strip_size_kb": 0, 00:30:18.162 "state": "online", 00:30:18.162 "raid_level": "raid1", 00:30:18.162 "superblock": true, 00:30:18.162 "num_base_bdevs": 4, 00:30:18.162 "num_base_bdevs_discovered": 3, 00:30:18.162 "num_base_bdevs_operational": 3, 00:30:18.162 "process": { 00:30:18.162 "type": "rebuild", 00:30:18.162 "target": "spare", 00:30:18.162 "progress": { 00:30:18.162 "blocks": 26624, 00:30:18.162 "percent": 41 00:30:18.162 } 00:30:18.162 }, 00:30:18.162 "base_bdevs_list": [ 00:30:18.162 { 00:30:18.162 "name": "spare", 00:30:18.162 "uuid": "364d1453-d82e-5a0c-848b-3ff45bd63be7", 00:30:18.162 "is_configured": true, 00:30:18.162 "data_offset": 2048, 00:30:18.162 "data_size": 63488 00:30:18.162 }, 00:30:18.162 { 
00:30:18.162 "name": null, 00:30:18.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:18.162 "is_configured": false, 00:30:18.162 "data_offset": 0, 00:30:18.162 "data_size": 63488 00:30:18.162 }, 00:30:18.162 { 00:30:18.162 "name": "BaseBdev3", 00:30:18.162 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:18.162 "is_configured": true, 00:30:18.162 "data_offset": 2048, 00:30:18.162 "data_size": 63488 00:30:18.162 }, 00:30:18.162 { 00:30:18.162 "name": "BaseBdev4", 00:30:18.162 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:18.162 "is_configured": true, 00:30:18.162 "data_offset": 2048, 00:30:18.162 "data_size": 63488 00:30:18.162 } 00:30:18.162 ] 00:30:18.162 }' 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:18.162 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:18.421 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:18.421 17:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:19.357 "name": "raid_bdev1", 00:30:19.357 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:19.357 "strip_size_kb": 0, 00:30:19.357 "state": "online", 00:30:19.357 "raid_level": "raid1", 00:30:19.357 "superblock": true, 00:30:19.357 "num_base_bdevs": 4, 00:30:19.357 "num_base_bdevs_discovered": 3, 00:30:19.357 "num_base_bdevs_operational": 3, 00:30:19.357 "process": { 00:30:19.357 "type": "rebuild", 00:30:19.357 "target": "spare", 00:30:19.357 "progress": { 00:30:19.357 "blocks": 51200, 00:30:19.357 "percent": 80 00:30:19.357 } 00:30:19.357 }, 00:30:19.357 "base_bdevs_list": [ 00:30:19.357 { 00:30:19.357 "name": "spare", 00:30:19.357 "uuid": "364d1453-d82e-5a0c-848b-3ff45bd63be7", 00:30:19.357 "is_configured": true, 00:30:19.357 "data_offset": 2048, 00:30:19.357 "data_size": 63488 00:30:19.357 }, 00:30:19.357 { 00:30:19.357 "name": null, 00:30:19.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:19.357 "is_configured": false, 00:30:19.357 "data_offset": 0, 00:30:19.357 "data_size": 63488 00:30:19.357 }, 00:30:19.357 { 00:30:19.357 "name": "BaseBdev3", 00:30:19.357 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:19.357 "is_configured": true, 00:30:19.357 "data_offset": 2048, 00:30:19.357 "data_size": 63488 00:30:19.357 }, 00:30:19.357 { 00:30:19.357 "name": "BaseBdev4", 00:30:19.357 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:19.357 "is_configured": true, 00:30:19.357 "data_offset": 
2048, 00:30:19.357 "data_size": 63488 00:30:19.357 } 00:30:19.357 ] 00:30:19.357 }' 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:19.357 17:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:19.924 [2024-11-26 17:26:49.916431] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:19.924 [2024-11-26 17:26:49.916552] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:19.924 [2024-11-26 17:26:49.916738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:20.494 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:20.494 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:20.494 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:20.494 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:20.494 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:20.494 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:20.494 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:20.494 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.494 17:26:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.494 17:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:20.494 17:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.494 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:20.494 "name": "raid_bdev1", 00:30:20.494 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:20.494 "strip_size_kb": 0, 00:30:20.494 "state": "online", 00:30:20.494 "raid_level": "raid1", 00:30:20.494 "superblock": true, 00:30:20.494 "num_base_bdevs": 4, 00:30:20.494 "num_base_bdevs_discovered": 3, 00:30:20.494 "num_base_bdevs_operational": 3, 00:30:20.494 "base_bdevs_list": [ 00:30:20.494 { 00:30:20.494 "name": "spare", 00:30:20.494 "uuid": "364d1453-d82e-5a0c-848b-3ff45bd63be7", 00:30:20.494 "is_configured": true, 00:30:20.494 "data_offset": 2048, 00:30:20.494 "data_size": 63488 00:30:20.494 }, 00:30:20.494 { 00:30:20.494 "name": null, 00:30:20.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:20.494 "is_configured": false, 00:30:20.494 "data_offset": 0, 00:30:20.494 "data_size": 63488 00:30:20.494 }, 00:30:20.494 { 00:30:20.494 "name": "BaseBdev3", 00:30:20.494 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:20.494 "is_configured": true, 00:30:20.494 "data_offset": 2048, 00:30:20.494 "data_size": 63488 00:30:20.494 }, 00:30:20.494 { 00:30:20.494 "name": "BaseBdev4", 00:30:20.494 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:20.494 "is_configured": true, 00:30:20.494 "data_offset": 2048, 00:30:20.494 "data_size": 63488 00:30:20.494 } 00:30:20.494 ] 00:30:20.494 }' 00:30:20.494 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:20.494 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:20.494 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:30:20.753 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:30:20.753 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:30:20.753 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:20.753 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:20.753 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:20.753 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:20.753 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:20.753 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.753 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:20.753 17:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.753 17:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:20.753 17:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.753 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:20.753 "name": "raid_bdev1", 00:30:20.753 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:20.753 "strip_size_kb": 0, 00:30:20.754 "state": "online", 00:30:20.754 "raid_level": "raid1", 00:30:20.754 "superblock": true, 00:30:20.754 "num_base_bdevs": 4, 00:30:20.754 "num_base_bdevs_discovered": 3, 00:30:20.754 "num_base_bdevs_operational": 3, 00:30:20.754 "base_bdevs_list": [ 00:30:20.754 { 00:30:20.754 "name": "spare", 00:30:20.754 "uuid": "364d1453-d82e-5a0c-848b-3ff45bd63be7", 00:30:20.754 "is_configured": true, 00:30:20.754 "data_offset": 2048, 
00:30:20.754 "data_size": 63488 00:30:20.754 }, 00:30:20.754 { 00:30:20.754 "name": null, 00:30:20.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:20.754 "is_configured": false, 00:30:20.754 "data_offset": 0, 00:30:20.754 "data_size": 63488 00:30:20.754 }, 00:30:20.754 { 00:30:20.754 "name": "BaseBdev3", 00:30:20.754 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:20.754 "is_configured": true, 00:30:20.754 "data_offset": 2048, 00:30:20.754 "data_size": 63488 00:30:20.754 }, 00:30:20.754 { 00:30:20.754 "name": "BaseBdev4", 00:30:20.754 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:20.754 "is_configured": true, 00:30:20.754 "data_offset": 2048, 00:30:20.754 "data_size": 63488 00:30:20.754 } 00:30:20.754 ] 00:30:20.754 }' 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:20.754 
17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:20.754 "name": "raid_bdev1", 00:30:20.754 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:20.754 "strip_size_kb": 0, 00:30:20.754 "state": "online", 00:30:20.754 "raid_level": "raid1", 00:30:20.754 "superblock": true, 00:30:20.754 "num_base_bdevs": 4, 00:30:20.754 "num_base_bdevs_discovered": 3, 00:30:20.754 "num_base_bdevs_operational": 3, 00:30:20.754 "base_bdevs_list": [ 00:30:20.754 { 00:30:20.754 "name": "spare", 00:30:20.754 "uuid": "364d1453-d82e-5a0c-848b-3ff45bd63be7", 00:30:20.754 "is_configured": true, 00:30:20.754 "data_offset": 2048, 00:30:20.754 "data_size": 63488 00:30:20.754 }, 00:30:20.754 { 00:30:20.754 "name": null, 00:30:20.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:20.754 "is_configured": false, 00:30:20.754 "data_offset": 0, 00:30:20.754 "data_size": 63488 00:30:20.754 }, 00:30:20.754 { 00:30:20.754 "name": "BaseBdev3", 00:30:20.754 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:20.754 "is_configured": true, 00:30:20.754 "data_offset": 2048, 00:30:20.754 "data_size": 63488 
00:30:20.754 }, 00:30:20.754 { 00:30:20.754 "name": "BaseBdev4", 00:30:20.754 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:20.754 "is_configured": true, 00:30:20.754 "data_offset": 2048, 00:30:20.754 "data_size": 63488 00:30:20.754 } 00:30:20.754 ] 00:30:20.754 }' 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:20.754 17:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:21.354 [2024-11-26 17:26:51.202883] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:21.354 [2024-11-26 17:26:51.202924] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:21.354 [2024-11-26 17:26:51.203041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:21.354 [2024-11-26 17:26:51.203136] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:21.354 [2024-11-26 17:26:51.203150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:21.354 
17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:21.354 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:21.612 /dev/nbd0 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:21.612 1+0 records in 00:30:21.612 1+0 records out 00:30:21.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276857 s, 14.8 MB/s 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:21.612 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:30:21.871 /dev/nbd1 00:30:21.871 17:26:51 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:21.871 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:21.871 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:21.871 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:30:21.871 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:21.871 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:21.871 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:21.871 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:30:21.871 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:21.871 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:21.871 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:21.871 1+0 records in 00:30:21.871 1+0 records out 00:30:21.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050084 s, 8.2 MB/s 00:30:21.872 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:21.872 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:30:21.872 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:21.872 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:21.872 17:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:30:21.872 17:26:51 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:21.872 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:21.872 17:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:22.130 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:30:22.130 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:22.130 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:22.130 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:22.130 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:22.130 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:22.130 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:22.390 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:22.390 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:22.390 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:22.390 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:22.390 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:22.390 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:22.390 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:22.390 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:22.390 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:30:22.390 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:22.650 [2024-11-26 17:26:52.564195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:22.650 [2024-11-26 
17:26:52.564274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:22.650 [2024-11-26 17:26:52.564304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:30:22.650 [2024-11-26 17:26:52.564317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:22.650 [2024-11-26 17:26:52.567136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:22.650 [2024-11-26 17:26:52.567181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:22.650 [2024-11-26 17:26:52.567296] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:22.650 [2024-11-26 17:26:52.567351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:22.650 [2024-11-26 17:26:52.567531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:22.650 [2024-11-26 17:26:52.567633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:22.650 spare 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:22.650 [2024-11-26 17:26:52.667583] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:30:22.650 [2024-11-26 17:26:52.667649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:22.650 [2024-11-26 17:26:52.668094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:30:22.650 [2024-11-26 17:26:52.668337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007b00 00:30:22.650 [2024-11-26 17:26:52.668361] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:30:22.650 [2024-11-26 17:26:52.668617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.650 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:22.651 17:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:22.651 17:26:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.651 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:22.651 "name": "raid_bdev1", 00:30:22.651 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:22.651 "strip_size_kb": 0, 00:30:22.651 "state": "online", 00:30:22.651 "raid_level": "raid1", 00:30:22.651 "superblock": true, 00:30:22.651 "num_base_bdevs": 4, 00:30:22.651 "num_base_bdevs_discovered": 3, 00:30:22.651 "num_base_bdevs_operational": 3, 00:30:22.651 "base_bdevs_list": [ 00:30:22.651 { 00:30:22.651 "name": "spare", 00:30:22.651 "uuid": "364d1453-d82e-5a0c-848b-3ff45bd63be7", 00:30:22.651 "is_configured": true, 00:30:22.651 "data_offset": 2048, 00:30:22.651 "data_size": 63488 00:30:22.651 }, 00:30:22.651 { 00:30:22.651 "name": null, 00:30:22.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.651 "is_configured": false, 00:30:22.651 "data_offset": 2048, 00:30:22.651 "data_size": 63488 00:30:22.651 }, 00:30:22.651 { 00:30:22.651 "name": "BaseBdev3", 00:30:22.651 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:22.651 "is_configured": true, 00:30:22.651 "data_offset": 2048, 00:30:22.651 "data_size": 63488 00:30:22.651 }, 00:30:22.651 { 00:30:22.651 "name": "BaseBdev4", 00:30:22.651 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:22.651 "is_configured": true, 00:30:22.651 "data_offset": 2048, 00:30:22.651 "data_size": 63488 00:30:22.651 } 00:30:22.651 ] 00:30:22.651 }' 00:30:22.651 17:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:22.651 17:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:23.220 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:23.220 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:23.220 17:26:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:23.220 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:23.220 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:23.221 "name": "raid_bdev1", 00:30:23.221 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:23.221 "strip_size_kb": 0, 00:30:23.221 "state": "online", 00:30:23.221 "raid_level": "raid1", 00:30:23.221 "superblock": true, 00:30:23.221 "num_base_bdevs": 4, 00:30:23.221 "num_base_bdevs_discovered": 3, 00:30:23.221 "num_base_bdevs_operational": 3, 00:30:23.221 "base_bdevs_list": [ 00:30:23.221 { 00:30:23.221 "name": "spare", 00:30:23.221 "uuid": "364d1453-d82e-5a0c-848b-3ff45bd63be7", 00:30:23.221 "is_configured": true, 00:30:23.221 "data_offset": 2048, 00:30:23.221 "data_size": 63488 00:30:23.221 }, 00:30:23.221 { 00:30:23.221 "name": null, 00:30:23.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.221 "is_configured": false, 00:30:23.221 "data_offset": 2048, 00:30:23.221 "data_size": 63488 00:30:23.221 }, 00:30:23.221 { 00:30:23.221 "name": "BaseBdev3", 00:30:23.221 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:23.221 "is_configured": true, 00:30:23.221 "data_offset": 2048, 00:30:23.221 "data_size": 63488 00:30:23.221 }, 00:30:23.221 { 00:30:23.221 
"name": "BaseBdev4", 00:30:23.221 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:23.221 "is_configured": true, 00:30:23.221 "data_offset": 2048, 00:30:23.221 "data_size": 63488 00:30:23.221 } 00:30:23.221 ] 00:30:23.221 }' 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:23.221 [2024-11-26 17:26:53.239762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:23.221 "name": "raid_bdev1", 00:30:23.221 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:23.221 "strip_size_kb": 0, 00:30:23.221 "state": "online", 00:30:23.221 "raid_level": "raid1", 00:30:23.221 "superblock": true, 00:30:23.221 "num_base_bdevs": 4, 00:30:23.221 "num_base_bdevs_discovered": 2, 00:30:23.221 "num_base_bdevs_operational": 2, 00:30:23.221 
"base_bdevs_list": [ 00:30:23.221 { 00:30:23.221 "name": null, 00:30:23.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.221 "is_configured": false, 00:30:23.221 "data_offset": 0, 00:30:23.221 "data_size": 63488 00:30:23.221 }, 00:30:23.221 { 00:30:23.221 "name": null, 00:30:23.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.221 "is_configured": false, 00:30:23.221 "data_offset": 2048, 00:30:23.221 "data_size": 63488 00:30:23.221 }, 00:30:23.221 { 00:30:23.221 "name": "BaseBdev3", 00:30:23.221 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:23.221 "is_configured": true, 00:30:23.221 "data_offset": 2048, 00:30:23.221 "data_size": 63488 00:30:23.221 }, 00:30:23.221 { 00:30:23.221 "name": "BaseBdev4", 00:30:23.221 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:23.221 "is_configured": true, 00:30:23.221 "data_offset": 2048, 00:30:23.221 "data_size": 63488 00:30:23.221 } 00:30:23.221 ] 00:30:23.221 }' 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:23.221 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:23.791 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:23.791 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.791 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:23.791 [2024-11-26 17:26:53.695107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:23.791 [2024-11-26 17:26:53.695351] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:23.791 [2024-11-26 17:26:53.695378] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:23.791 [2024-11-26 17:26:53.695423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:23.791 [2024-11-26 17:26:53.710002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:30:23.791 17:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.791 17:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:30:23.791 [2024-11-26 17:26:53.712438] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:24.728 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:24.728 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:24.728 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:24.728 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:24.728 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:24.728 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:24.728 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.728 17:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.728 17:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:24.728 17:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.728 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:24.728 "name": "raid_bdev1", 00:30:24.728 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:24.728 "strip_size_kb": 0, 00:30:24.728 "state": "online", 00:30:24.728 "raid_level": "raid1", 
00:30:24.728 "superblock": true, 00:30:24.728 "num_base_bdevs": 4, 00:30:24.728 "num_base_bdevs_discovered": 3, 00:30:24.728 "num_base_bdevs_operational": 3, 00:30:24.728 "process": { 00:30:24.728 "type": "rebuild", 00:30:24.728 "target": "spare", 00:30:24.728 "progress": { 00:30:24.728 "blocks": 20480, 00:30:24.728 "percent": 32 00:30:24.728 } 00:30:24.728 }, 00:30:24.728 "base_bdevs_list": [ 00:30:24.728 { 00:30:24.728 "name": "spare", 00:30:24.728 "uuid": "364d1453-d82e-5a0c-848b-3ff45bd63be7", 00:30:24.728 "is_configured": true, 00:30:24.728 "data_offset": 2048, 00:30:24.728 "data_size": 63488 00:30:24.728 }, 00:30:24.728 { 00:30:24.728 "name": null, 00:30:24.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.728 "is_configured": false, 00:30:24.728 "data_offset": 2048, 00:30:24.728 "data_size": 63488 00:30:24.728 }, 00:30:24.728 { 00:30:24.728 "name": "BaseBdev3", 00:30:24.728 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:24.728 "is_configured": true, 00:30:24.728 "data_offset": 2048, 00:30:24.728 "data_size": 63488 00:30:24.728 }, 00:30:24.728 { 00:30:24.728 "name": "BaseBdev4", 00:30:24.728 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:24.728 "is_configured": true, 00:30:24.728 "data_offset": 2048, 00:30:24.728 "data_size": 63488 00:30:24.728 } 00:30:24.728 ] 00:30:24.728 }' 00:30:24.728 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:24.728 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:24.728 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:24.987 [2024-11-26 17:26:54.868566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:24.987 [2024-11-26 17:26:54.920397] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:24.987 [2024-11-26 17:26:54.920478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:24.987 [2024-11-26 17:26:54.920502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:24.987 [2024-11-26 17:26:54.920511] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:24.987 "name": "raid_bdev1", 00:30:24.987 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:24.987 "strip_size_kb": 0, 00:30:24.987 "state": "online", 00:30:24.987 "raid_level": "raid1", 00:30:24.987 "superblock": true, 00:30:24.987 "num_base_bdevs": 4, 00:30:24.987 "num_base_bdevs_discovered": 2, 00:30:24.987 "num_base_bdevs_operational": 2, 00:30:24.987 "base_bdevs_list": [ 00:30:24.987 { 00:30:24.987 "name": null, 00:30:24.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.987 "is_configured": false, 00:30:24.987 "data_offset": 0, 00:30:24.987 "data_size": 63488 00:30:24.987 }, 00:30:24.987 { 00:30:24.987 "name": null, 00:30:24.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.987 "is_configured": false, 00:30:24.987 "data_offset": 2048, 00:30:24.987 "data_size": 63488 00:30:24.987 }, 00:30:24.987 { 00:30:24.987 "name": "BaseBdev3", 00:30:24.987 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:24.987 "is_configured": true, 00:30:24.987 "data_offset": 2048, 00:30:24.987 "data_size": 63488 00:30:24.987 }, 00:30:24.987 { 00:30:24.987 "name": "BaseBdev4", 00:30:24.987 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:24.987 "is_configured": true, 00:30:24.987 "data_offset": 2048, 00:30:24.987 "data_size": 63488 00:30:24.987 } 00:30:24.987 ] 00:30:24.987 }' 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:30:24.987 17:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:25.592 17:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:25.592 17:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.592 17:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:25.592 [2024-11-26 17:26:55.371549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:25.592 [2024-11-26 17:26:55.371626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:25.592 [2024-11-26 17:26:55.371669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:30:25.592 [2024-11-26 17:26:55.371683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:25.592 [2024-11-26 17:26:55.372257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:25.592 [2024-11-26 17:26:55.372290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:25.592 [2024-11-26 17:26:55.372407] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:25.592 [2024-11-26 17:26:55.372422] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:25.592 [2024-11-26 17:26:55.372447] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:25.592 [2024-11-26 17:26:55.372473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:25.592 [2024-11-26 17:26:55.388382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:30:25.592 spare 00:30:25.592 17:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.592 17:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:30:25.592 [2024-11-26 17:26:55.390832] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:26.549 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:26.549 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:26.549 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:26.549 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:26.549 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:26.549 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:26.549 17:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.549 17:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:26.549 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.549 17:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.549 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:26.549 "name": "raid_bdev1", 00:30:26.549 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:26.549 "strip_size_kb": 0, 00:30:26.549 "state": "online", 00:30:26.549 
"raid_level": "raid1", 00:30:26.549 "superblock": true, 00:30:26.549 "num_base_bdevs": 4, 00:30:26.549 "num_base_bdevs_discovered": 3, 00:30:26.549 "num_base_bdevs_operational": 3, 00:30:26.549 "process": { 00:30:26.549 "type": "rebuild", 00:30:26.549 "target": "spare", 00:30:26.549 "progress": { 00:30:26.549 "blocks": 20480, 00:30:26.549 "percent": 32 00:30:26.549 } 00:30:26.549 }, 00:30:26.549 "base_bdevs_list": [ 00:30:26.549 { 00:30:26.549 "name": "spare", 00:30:26.549 "uuid": "364d1453-d82e-5a0c-848b-3ff45bd63be7", 00:30:26.549 "is_configured": true, 00:30:26.549 "data_offset": 2048, 00:30:26.549 "data_size": 63488 00:30:26.549 }, 00:30:26.550 { 00:30:26.550 "name": null, 00:30:26.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.550 "is_configured": false, 00:30:26.550 "data_offset": 2048, 00:30:26.550 "data_size": 63488 00:30:26.550 }, 00:30:26.550 { 00:30:26.550 "name": "BaseBdev3", 00:30:26.550 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:26.550 "is_configured": true, 00:30:26.550 "data_offset": 2048, 00:30:26.550 "data_size": 63488 00:30:26.550 }, 00:30:26.550 { 00:30:26.550 "name": "BaseBdev4", 00:30:26.550 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:26.550 "is_configured": true, 00:30:26.550 "data_offset": 2048, 00:30:26.550 "data_size": 63488 00:30:26.550 } 00:30:26.550 ] 00:30:26.550 }' 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:26.550 [2024-11-26 17:26:56.554563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:26.550 [2024-11-26 17:26:56.599201] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:26.550 [2024-11-26 17:26:56.599276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:26.550 [2024-11-26 17:26:56.599295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:26.550 [2024-11-26 17:26:56.599307] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:26.550 
17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:26.550 17:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.808 17:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:26.808 "name": "raid_bdev1", 00:30:26.808 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:26.808 "strip_size_kb": 0, 00:30:26.808 "state": "online", 00:30:26.808 "raid_level": "raid1", 00:30:26.808 "superblock": true, 00:30:26.808 "num_base_bdevs": 4, 00:30:26.809 "num_base_bdevs_discovered": 2, 00:30:26.809 "num_base_bdevs_operational": 2, 00:30:26.809 "base_bdevs_list": [ 00:30:26.809 { 00:30:26.809 "name": null, 00:30:26.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.809 "is_configured": false, 00:30:26.809 "data_offset": 0, 00:30:26.809 "data_size": 63488 00:30:26.809 }, 00:30:26.809 { 00:30:26.809 "name": null, 00:30:26.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.809 "is_configured": false, 00:30:26.809 "data_offset": 2048, 00:30:26.809 "data_size": 63488 00:30:26.809 }, 00:30:26.809 { 00:30:26.809 "name": "BaseBdev3", 00:30:26.809 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:26.809 "is_configured": true, 00:30:26.809 "data_offset": 2048, 00:30:26.809 "data_size": 63488 00:30:26.809 }, 00:30:26.809 { 00:30:26.809 "name": "BaseBdev4", 00:30:26.809 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:26.809 "is_configured": true, 00:30:26.809 "data_offset": 2048, 00:30:26.809 "data_size": 63488 00:30:26.809 } 00:30:26.809 ] 00:30:26.809 }' 00:30:26.809 17:26:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:26.809 17:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:27.068 17:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:27.068 17:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:27.068 17:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:27.068 17:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:27.068 17:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:27.068 17:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:27.068 17:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.068 17:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:27.068 17:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.068 17:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.068 17:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:27.068 "name": "raid_bdev1", 00:30:27.068 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:27.068 "strip_size_kb": 0, 00:30:27.068 "state": "online", 00:30:27.068 "raid_level": "raid1", 00:30:27.068 "superblock": true, 00:30:27.068 "num_base_bdevs": 4, 00:30:27.068 "num_base_bdevs_discovered": 2, 00:30:27.068 "num_base_bdevs_operational": 2, 00:30:27.068 "base_bdevs_list": [ 00:30:27.068 { 00:30:27.068 "name": null, 00:30:27.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.068 "is_configured": false, 00:30:27.068 "data_offset": 0, 00:30:27.068 "data_size": 63488 00:30:27.068 }, 00:30:27.068 
{ 00:30:27.068 "name": null, 00:30:27.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.068 "is_configured": false, 00:30:27.068 "data_offset": 2048, 00:30:27.068 "data_size": 63488 00:30:27.068 }, 00:30:27.068 { 00:30:27.068 "name": "BaseBdev3", 00:30:27.068 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:27.068 "is_configured": true, 00:30:27.068 "data_offset": 2048, 00:30:27.068 "data_size": 63488 00:30:27.068 }, 00:30:27.068 { 00:30:27.068 "name": "BaseBdev4", 00:30:27.068 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:27.068 "is_configured": true, 00:30:27.068 "data_offset": 2048, 00:30:27.068 "data_size": 63488 00:30:27.068 } 00:30:27.068 ] 00:30:27.068 }' 00:30:27.068 17:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:27.068 17:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:27.068 17:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:27.327 17:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:27.327 17:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:30:27.327 17:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.327 17:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:27.327 17:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.327 17:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:27.327 17:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.327 17:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:27.327 [2024-11-26 17:26:57.213898] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:27.327 [2024-11-26 17:26:57.213970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:27.327 [2024-11-26 17:26:57.213996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:30:27.327 [2024-11-26 17:26:57.214011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:27.327 [2024-11-26 17:26:57.214592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:27.327 [2024-11-26 17:26:57.214625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:27.327 [2024-11-26 17:26:57.214723] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:27.327 [2024-11-26 17:26:57.214742] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:27.327 [2024-11-26 17:26:57.214753] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:27.327 [2024-11-26 17:26:57.214784] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:30:27.327 BaseBdev1 00:30:27.327 17:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.327 17:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:28.267 17:26:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:28.267 "name": "raid_bdev1", 00:30:28.267 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:28.267 "strip_size_kb": 0, 00:30:28.267 "state": "online", 00:30:28.267 "raid_level": "raid1", 00:30:28.267 "superblock": true, 00:30:28.267 "num_base_bdevs": 4, 00:30:28.267 "num_base_bdevs_discovered": 2, 00:30:28.267 "num_base_bdevs_operational": 2, 00:30:28.267 "base_bdevs_list": [ 00:30:28.267 { 00:30:28.267 "name": null, 00:30:28.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.267 "is_configured": false, 00:30:28.267 "data_offset": 0, 00:30:28.267 "data_size": 63488 00:30:28.267 }, 00:30:28.267 { 00:30:28.267 "name": null, 00:30:28.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.267 
"is_configured": false, 00:30:28.267 "data_offset": 2048, 00:30:28.267 "data_size": 63488 00:30:28.267 }, 00:30:28.267 { 00:30:28.267 "name": "BaseBdev3", 00:30:28.267 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:28.267 "is_configured": true, 00:30:28.267 "data_offset": 2048, 00:30:28.267 "data_size": 63488 00:30:28.267 }, 00:30:28.267 { 00:30:28.267 "name": "BaseBdev4", 00:30:28.267 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:28.267 "is_configured": true, 00:30:28.267 "data_offset": 2048, 00:30:28.267 "data_size": 63488 00:30:28.267 } 00:30:28.267 ] 00:30:28.267 }' 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:28.267 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:30:28.836 "name": "raid_bdev1", 00:30:28.836 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:28.836 "strip_size_kb": 0, 00:30:28.836 "state": "online", 00:30:28.836 "raid_level": "raid1", 00:30:28.836 "superblock": true, 00:30:28.836 "num_base_bdevs": 4, 00:30:28.836 "num_base_bdevs_discovered": 2, 00:30:28.836 "num_base_bdevs_operational": 2, 00:30:28.836 "base_bdevs_list": [ 00:30:28.836 { 00:30:28.836 "name": null, 00:30:28.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.836 "is_configured": false, 00:30:28.836 "data_offset": 0, 00:30:28.836 "data_size": 63488 00:30:28.836 }, 00:30:28.836 { 00:30:28.836 "name": null, 00:30:28.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.836 "is_configured": false, 00:30:28.836 "data_offset": 2048, 00:30:28.836 "data_size": 63488 00:30:28.836 }, 00:30:28.836 { 00:30:28.836 "name": "BaseBdev3", 00:30:28.836 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:28.836 "is_configured": true, 00:30:28.836 "data_offset": 2048, 00:30:28.836 "data_size": 63488 00:30:28.836 }, 00:30:28.836 { 00:30:28.836 "name": "BaseBdev4", 00:30:28.836 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:28.836 "is_configured": true, 00:30:28.836 "data_offset": 2048, 00:30:28.836 "data_size": 63488 00:30:28.836 } 00:30:28.836 ] 00:30:28.836 }' 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:28.836 [2024-11-26 17:26:58.813744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:28.836 [2024-11-26 17:26:58.813988] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:28.836 [2024-11-26 17:26:58.814005] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:28.836 request: 00:30:28.836 { 00:30:28.836 "base_bdev": "BaseBdev1", 00:30:28.836 "raid_bdev": "raid_bdev1", 00:30:28.836 "method": "bdev_raid_add_base_bdev", 00:30:28.836 "req_id": 1 00:30:28.836 } 00:30:28.836 Got JSON-RPC error response 00:30:28.836 response: 00:30:28.836 { 00:30:28.836 "code": -22, 00:30:28.836 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:28.836 } 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:28.836 17:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:29.772 "name": "raid_bdev1", 00:30:29.772 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:29.772 "strip_size_kb": 0, 00:30:29.772 "state": "online", 00:30:29.772 "raid_level": "raid1", 00:30:29.772 "superblock": true, 00:30:29.772 "num_base_bdevs": 4, 00:30:29.772 "num_base_bdevs_discovered": 2, 00:30:29.772 "num_base_bdevs_operational": 2, 00:30:29.772 "base_bdevs_list": [ 00:30:29.772 { 00:30:29.772 "name": null, 00:30:29.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.772 "is_configured": false, 00:30:29.772 "data_offset": 0, 00:30:29.772 "data_size": 63488 00:30:29.772 }, 00:30:29.772 { 00:30:29.772 "name": null, 00:30:29.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.772 "is_configured": false, 00:30:29.772 "data_offset": 2048, 00:30:29.772 "data_size": 63488 00:30:29.772 }, 00:30:29.772 { 00:30:29.772 "name": "BaseBdev3", 00:30:29.772 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:29.772 "is_configured": true, 00:30:29.772 "data_offset": 2048, 00:30:29.772 "data_size": 63488 00:30:29.772 }, 00:30:29.772 { 00:30:29.772 "name": "BaseBdev4", 00:30:29.772 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:29.772 "is_configured": true, 00:30:29.772 "data_offset": 2048, 00:30:29.772 "data_size": 63488 00:30:29.772 } 00:30:29.772 ] 00:30:29.772 }' 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:29.772 17:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:30.340 17:27:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:30.340 "name": "raid_bdev1", 00:30:30.340 "uuid": "f6694f48-1d36-4888-a5e0-8720c8ab5ca3", 00:30:30.340 "strip_size_kb": 0, 00:30:30.340 "state": "online", 00:30:30.340 "raid_level": "raid1", 00:30:30.340 "superblock": true, 00:30:30.340 "num_base_bdevs": 4, 00:30:30.340 "num_base_bdevs_discovered": 2, 00:30:30.340 "num_base_bdevs_operational": 2, 00:30:30.340 "base_bdevs_list": [ 00:30:30.340 { 00:30:30.340 "name": null, 00:30:30.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.340 "is_configured": false, 00:30:30.340 "data_offset": 0, 00:30:30.340 "data_size": 63488 00:30:30.340 }, 00:30:30.340 { 00:30:30.340 "name": null, 00:30:30.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.340 "is_configured": false, 00:30:30.340 "data_offset": 2048, 00:30:30.340 "data_size": 63488 00:30:30.340 }, 00:30:30.340 { 00:30:30.340 "name": "BaseBdev3", 00:30:30.340 "uuid": "ca7e6e75-6359-5bff-bc82-045f8b936fa0", 00:30:30.340 "is_configured": true, 00:30:30.340 "data_offset": 2048, 00:30:30.340 "data_size": 63488 00:30:30.340 }, 
00:30:30.340 { 00:30:30.340 "name": "BaseBdev4", 00:30:30.340 "uuid": "0ca1c095-a8bf-50da-a0d7-19a36314cf2a", 00:30:30.340 "is_configured": true, 00:30:30.340 "data_offset": 2048, 00:30:30.340 "data_size": 63488 00:30:30.340 } 00:30:30.340 ] 00:30:30.340 }' 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78136 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78136 ']' 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78136 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:30.340 17:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78136 00:30:30.599 17:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:30.599 17:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:30.599 killing process with pid 78136 00:30:30.599 17:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78136' 00:30:30.599 Received shutdown signal, test time was about 60.000000 seconds 00:30:30.599 00:30:30.599 Latency(us) 00:30:30.599 [2024-11-26T17:27:00.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.599 
[2024-11-26T17:27:00.713Z] =================================================================================================================== 00:30:30.599 [2024-11-26T17:27:00.713Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:30.599 17:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78136 00:30:30.599 [2024-11-26 17:27:00.470370] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:30.599 17:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78136 00:30:30.599 [2024-11-26 17:27:00.470520] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:30.599 [2024-11-26 17:27:00.470616] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:30.599 [2024-11-26 17:27:00.470630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:30:31.167 [2024-11-26 17:27:00.993644] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:32.104 17:27:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:30:32.104 00:30:32.104 real 0m26.767s 00:30:32.104 user 0m31.437s 00:30:32.104 sys 0m5.049s 00:30:32.104 17:27:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:32.104 17:27:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:32.104 ************************************ 00:30:32.104 END TEST raid_rebuild_test_sb 00:30:32.104 ************************************ 00:30:32.363 17:27:02 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:30:32.364 17:27:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:30:32.364 17:27:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:32.364 17:27:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:30:32.364 ************************************ 00:30:32.364 START TEST raid_rebuild_test_io 00:30:32.364 ************************************ 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78907 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78907 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78907 ']' 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.364 17:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:32.364 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:32.364 Zero copy mechanism will not be used. 00:30:32.364 [2024-11-26 17:27:02.383707] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:30:32.364 [2024-11-26 17:27:02.383849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78907 ] 00:30:32.623 [2024-11-26 17:27:02.559827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.623 [2024-11-26 17:27:02.702466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.881 [2024-11-26 17:27:02.931826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:32.881 [2024-11-26 17:27:02.931907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:33.141 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.141 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:30:33.141 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:33.141 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:30:33.141 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.141 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.400 BaseBdev1_malloc 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.400 [2024-11-26 17:27:03.286665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:33.400 [2024-11-26 17:27:03.286753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:33.400 [2024-11-26 17:27:03.286780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:33.400 [2024-11-26 17:27:03.286796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:33.400 [2024-11-26 17:27:03.289442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:33.400 [2024-11-26 17:27:03.289491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:33.400 BaseBdev1 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:30:33.400 BaseBdev2_malloc 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.400 [2024-11-26 17:27:03.351491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:33.400 [2024-11-26 17:27:03.351610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:33.400 [2024-11-26 17:27:03.351646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:33.400 [2024-11-26 17:27:03.351663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:33.400 [2024-11-26 17:27:03.354488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:33.400 [2024-11-26 17:27:03.354560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:33.400 BaseBdev2 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.400 BaseBdev3_malloc 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.400 [2024-11-26 17:27:03.425120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:33.400 [2024-11-26 17:27:03.425200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:33.400 [2024-11-26 17:27:03.425228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:33.400 [2024-11-26 17:27:03.425244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:33.400 [2024-11-26 17:27:03.427981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:33.400 [2024-11-26 17:27:03.428031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:33.400 BaseBdev3 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.400 BaseBdev4_malloc 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.400 [2024-11-26 17:27:03.488054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:33.400 [2024-11-26 17:27:03.488293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:33.400 [2024-11-26 17:27:03.488358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:30:33.400 [2024-11-26 17:27:03.488437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:33.400 [2024-11-26 17:27:03.491220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:33.400 [2024-11-26 17:27:03.491388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:33.400 BaseBdev4 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.400 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.660 spare_malloc 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.660 spare_delay 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.660 [2024-11-26 17:27:03.567845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:33.660 [2024-11-26 17:27:03.568039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:33.660 [2024-11-26 17:27:03.568100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:30:33.660 [2024-11-26 17:27:03.568184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:33.660 [2024-11-26 17:27:03.570872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:33.660 [2024-11-26 17:27:03.571060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:33.660 spare 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.660 [2024-11-26 17:27:03.580034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:33.660 [2024-11-26 17:27:03.582533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:33.660 [2024-11-26 17:27:03.582758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:33.660 [2024-11-26 17:27:03.582875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:30:33.660 [2024-11-26 17:27:03.583068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:33.660 [2024-11-26 17:27:03.583186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:33.660 [2024-11-26 17:27:03.583598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:30:33.660 [2024-11-26 17:27:03.583908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:33.660 [2024-11-26 17:27:03.584013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:33.660 [2024-11-26 17:27:03.584336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:33.660 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:33.661 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.661 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.661 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.661 17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:33.661 "name": "raid_bdev1", 00:30:33.661 "uuid": "b3267197-d2ea-46f8-a498-a30744730492", 00:30:33.661 "strip_size_kb": 0, 00:30:33.661 "state": "online", 00:30:33.661 "raid_level": "raid1", 00:30:33.661 "superblock": false, 00:30:33.661 "num_base_bdevs": 4, 00:30:33.661 "num_base_bdevs_discovered": 4, 00:30:33.661 "num_base_bdevs_operational": 4, 00:30:33.661 "base_bdevs_list": [ 00:30:33.661 { 00:30:33.661 "name": "BaseBdev1", 00:30:33.661 "uuid": "53f2eb21-6349-569b-8699-2782aaf8e5fa", 00:30:33.661 "is_configured": true, 00:30:33.661 "data_offset": 0, 00:30:33.661 "data_size": 65536 00:30:33.661 }, 00:30:33.661 { 00:30:33.661 "name": "BaseBdev2", 00:30:33.661 "uuid": "43fe2fa5-d94a-5c0d-b7b9-a6bb7984fb41", 00:30:33.661 "is_configured": true, 00:30:33.661 "data_offset": 0, 00:30:33.661 "data_size": 65536 00:30:33.661 }, 00:30:33.661 { 00:30:33.661 "name": "BaseBdev3", 00:30:33.661 "uuid": "a542323b-ef04-520f-9eb0-1fa3bad66de7", 00:30:33.661 "is_configured": true, 00:30:33.661 "data_offset": 0, 00:30:33.661 "data_size": 65536 00:30:33.661 }, 00:30:33.661 { 00:30:33.661 "name": "BaseBdev4", 00:30:33.661 "uuid": "df72e05a-f8f1-5471-b04c-bc894125ca17", 00:30:33.661 "is_configured": true, 00:30:33.661 "data_offset": 0, 00:30:33.661 "data_size": 65536 00:30:33.661 } 00:30:33.661 ] 00:30:33.661 }' 00:30:33.661 
17:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:33.661 17:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.919 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:33.919 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:30:33.919 17:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.919 17:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.919 [2024-11-26 17:27:04.032071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:34.178 17:27:04 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:34.178 [2024-11-26 17:27:04.115656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:34.178 "name": "raid_bdev1", 00:30:34.178 "uuid": "b3267197-d2ea-46f8-a498-a30744730492", 00:30:34.178 "strip_size_kb": 0, 00:30:34.178 "state": "online", 00:30:34.178 "raid_level": "raid1", 00:30:34.178 "superblock": false, 00:30:34.178 "num_base_bdevs": 4, 00:30:34.178 "num_base_bdevs_discovered": 3, 00:30:34.178 "num_base_bdevs_operational": 3, 00:30:34.178 "base_bdevs_list": [ 00:30:34.178 { 00:30:34.178 "name": null, 00:30:34.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.178 "is_configured": false, 00:30:34.178 "data_offset": 0, 00:30:34.178 "data_size": 65536 00:30:34.178 }, 00:30:34.178 { 00:30:34.178 "name": "BaseBdev2", 00:30:34.178 "uuid": "43fe2fa5-d94a-5c0d-b7b9-a6bb7984fb41", 00:30:34.178 "is_configured": true, 00:30:34.178 "data_offset": 0, 00:30:34.178 "data_size": 65536 00:30:34.178 }, 00:30:34.178 { 00:30:34.178 "name": "BaseBdev3", 00:30:34.178 "uuid": "a542323b-ef04-520f-9eb0-1fa3bad66de7", 00:30:34.178 "is_configured": true, 00:30:34.178 "data_offset": 0, 00:30:34.178 "data_size": 65536 00:30:34.178 }, 00:30:34.178 { 00:30:34.178 "name": "BaseBdev4", 00:30:34.178 "uuid": "df72e05a-f8f1-5471-b04c-bc894125ca17", 00:30:34.178 "is_configured": true, 00:30:34.178 "data_offset": 0, 00:30:34.178 "data_size": 65536 00:30:34.178 } 00:30:34.178 ] 00:30:34.178 }' 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:34.178 17:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:34.178 [2024-11-26 17:27:04.237175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:30:34.178 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:34.178 Zero copy mechanism will not be used. 00:30:34.178 Running I/O for 60 seconds... 
00:30:34.745 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:34.745 17:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.745 17:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:34.745 [2024-11-26 17:27:04.585164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:34.745 17:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.745 17:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:30:34.745 [2024-11-26 17:27:04.665690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:30:34.745 [2024-11-26 17:27:04.668157] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:34.745 [2024-11-26 17:27:04.776757] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:34.745 [2024-11-26 17:27:04.777636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:35.004 [2024-11-26 17:27:04.991891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:35.004 [2024-11-26 17:27:04.992989] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:35.263 151.00 IOPS, 453.00 MiB/s [2024-11-26T17:27:05.377Z] [2024-11-26 17:27:05.346886] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:35.263 [2024-11-26 17:27:05.347298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:35.520 [2024-11-26 17:27:05.465433] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.780 [2024-11-26 17:27:05.682532] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:35.780 "name": "raid_bdev1", 00:30:35.780 "uuid": "b3267197-d2ea-46f8-a498-a30744730492", 00:30:35.780 "strip_size_kb": 0, 00:30:35.780 "state": "online", 00:30:35.780 "raid_level": "raid1", 00:30:35.780 "superblock": false, 00:30:35.780 "num_base_bdevs": 4, 00:30:35.780 "num_base_bdevs_discovered": 4, 00:30:35.780 "num_base_bdevs_operational": 4, 00:30:35.780 "process": { 00:30:35.780 "type": "rebuild", 00:30:35.780 "target": "spare", 00:30:35.780 "progress": { 00:30:35.780 "blocks": 12288, 
00:30:35.780 "percent": 18 00:30:35.780 } 00:30:35.780 }, 00:30:35.780 "base_bdevs_list": [ 00:30:35.780 { 00:30:35.780 "name": "spare", 00:30:35.780 "uuid": "46f306e3-416f-5cd6-b27c-f0fe6e9bbb33", 00:30:35.780 "is_configured": true, 00:30:35.780 "data_offset": 0, 00:30:35.780 "data_size": 65536 00:30:35.780 }, 00:30:35.780 { 00:30:35.780 "name": "BaseBdev2", 00:30:35.780 "uuid": "43fe2fa5-d94a-5c0d-b7b9-a6bb7984fb41", 00:30:35.780 "is_configured": true, 00:30:35.780 "data_offset": 0, 00:30:35.780 "data_size": 65536 00:30:35.780 }, 00:30:35.780 { 00:30:35.780 "name": "BaseBdev3", 00:30:35.780 "uuid": "a542323b-ef04-520f-9eb0-1fa3bad66de7", 00:30:35.780 "is_configured": true, 00:30:35.780 "data_offset": 0, 00:30:35.780 "data_size": 65536 00:30:35.780 }, 00:30:35.780 { 00:30:35.780 "name": "BaseBdev4", 00:30:35.780 "uuid": "df72e05a-f8f1-5471-b04c-bc894125ca17", 00:30:35.780 "is_configured": true, 00:30:35.780 "data_offset": 0, 00:30:35.780 "data_size": 65536 00:30:35.780 } 00:30:35.780 ] 00:30:35.780 }' 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.780 17:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:35.780 [2024-11-26 17:27:05.774012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:35.780 [2024-11-26 17:27:05.813789] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:36.039 [2024-11-26 17:27:05.922696] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:36.039 [2024-11-26 17:27:05.943360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:36.039 [2024-11-26 17:27:05.943458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:36.039 [2024-11-26 17:27:05.943477] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:36.039 [2024-11-26 17:27:05.973430] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:30:36.039 17:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.039 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:36.039 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:36.039 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:36.039 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:36.039 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:36.039 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:36.039 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:36.039 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:36.039 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:36.039 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:36.039 17:27:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:36.039 17:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.039 17:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.039 17:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:36.039 17:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.039 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:36.039 "name": "raid_bdev1", 00:30:36.039 "uuid": "b3267197-d2ea-46f8-a498-a30744730492", 00:30:36.039 "strip_size_kb": 0, 00:30:36.039 "state": "online", 00:30:36.039 "raid_level": "raid1", 00:30:36.039 "superblock": false, 00:30:36.039 "num_base_bdevs": 4, 00:30:36.039 "num_base_bdevs_discovered": 3, 00:30:36.039 "num_base_bdevs_operational": 3, 00:30:36.039 "base_bdevs_list": [ 00:30:36.039 { 00:30:36.039 "name": null, 00:30:36.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.039 "is_configured": false, 00:30:36.039 "data_offset": 0, 00:30:36.039 "data_size": 65536 00:30:36.039 }, 00:30:36.039 { 00:30:36.039 "name": "BaseBdev2", 00:30:36.039 "uuid": "43fe2fa5-d94a-5c0d-b7b9-a6bb7984fb41", 00:30:36.039 "is_configured": true, 00:30:36.039 "data_offset": 0, 00:30:36.039 "data_size": 65536 00:30:36.039 }, 00:30:36.039 { 00:30:36.039 "name": "BaseBdev3", 00:30:36.039 "uuid": "a542323b-ef04-520f-9eb0-1fa3bad66de7", 00:30:36.039 "is_configured": true, 00:30:36.039 "data_offset": 0, 00:30:36.039 "data_size": 65536 00:30:36.039 }, 00:30:36.039 { 00:30:36.039 "name": "BaseBdev4", 00:30:36.039 "uuid": "df72e05a-f8f1-5471-b04c-bc894125ca17", 00:30:36.039 "is_configured": true, 00:30:36.039 "data_offset": 0, 00:30:36.039 "data_size": 65536 00:30:36.039 } 00:30:36.039 ] 00:30:36.039 }' 00:30:36.039 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:30:36.039 17:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:36.557 137.50 IOPS, 412.50 MiB/s [2024-11-26T17:27:06.671Z] 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:36.557 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:36.557 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:36.557 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:36.557 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:36.557 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.558 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:36.558 17:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.558 17:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:36.558 17:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.558 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:36.558 "name": "raid_bdev1", 00:30:36.558 "uuid": "b3267197-d2ea-46f8-a498-a30744730492", 00:30:36.558 "strip_size_kb": 0, 00:30:36.558 "state": "online", 00:30:36.558 "raid_level": "raid1", 00:30:36.558 "superblock": false, 00:30:36.558 "num_base_bdevs": 4, 00:30:36.558 "num_base_bdevs_discovered": 3, 00:30:36.558 "num_base_bdevs_operational": 3, 00:30:36.558 "base_bdevs_list": [ 00:30:36.558 { 00:30:36.558 "name": null, 00:30:36.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.558 "is_configured": false, 00:30:36.558 "data_offset": 0, 00:30:36.558 "data_size": 65536 00:30:36.558 }, 00:30:36.558 { 
00:30:36.558 "name": "BaseBdev2", 00:30:36.558 "uuid": "43fe2fa5-d94a-5c0d-b7b9-a6bb7984fb41", 00:30:36.558 "is_configured": true, 00:30:36.558 "data_offset": 0, 00:30:36.558 "data_size": 65536 00:30:36.558 }, 00:30:36.558 { 00:30:36.558 "name": "BaseBdev3", 00:30:36.558 "uuid": "a542323b-ef04-520f-9eb0-1fa3bad66de7", 00:30:36.558 "is_configured": true, 00:30:36.558 "data_offset": 0, 00:30:36.558 "data_size": 65536 00:30:36.558 }, 00:30:36.558 { 00:30:36.558 "name": "BaseBdev4", 00:30:36.558 "uuid": "df72e05a-f8f1-5471-b04c-bc894125ca17", 00:30:36.558 "is_configured": true, 00:30:36.558 "data_offset": 0, 00:30:36.558 "data_size": 65536 00:30:36.558 } 00:30:36.558 ] 00:30:36.558 }' 00:30:36.558 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:36.558 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:36.558 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:36.558 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:36.558 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:36.558 17:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.558 17:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:36.558 [2024-11-26 17:27:06.592467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:36.558 17:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.558 17:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:30:36.558 [2024-11-26 17:27:06.665190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:30:36.558 [2024-11-26 17:27:06.667702] bdev_raid.c:2935:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:36.817 [2024-11-26 17:27:06.784662] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:36.817 [2024-11-26 17:27:06.786625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:37.076 [2024-11-26 17:27:07.011655] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:37.076 [2024-11-26 17:27:07.012736] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:37.336 155.33 IOPS, 466.00 MiB/s [2024-11-26T17:27:07.450Z] [2024-11-26 17:27:07.347274] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:37.336 [2024-11-26 17:27:07.349228] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:37.594 [2024-11-26 17:27:07.585056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:37.594 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:37.594 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:37.594 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:37.594 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:37.594 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:37.594 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.594 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:30:37.594 17:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.594 17:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:37.594 17:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.594 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:37.594 "name": "raid_bdev1", 00:30:37.594 "uuid": "b3267197-d2ea-46f8-a498-a30744730492", 00:30:37.594 "strip_size_kb": 0, 00:30:37.594 "state": "online", 00:30:37.594 "raid_level": "raid1", 00:30:37.594 "superblock": false, 00:30:37.594 "num_base_bdevs": 4, 00:30:37.594 "num_base_bdevs_discovered": 4, 00:30:37.594 "num_base_bdevs_operational": 4, 00:30:37.594 "process": { 00:30:37.594 "type": "rebuild", 00:30:37.594 "target": "spare", 00:30:37.594 "progress": { 00:30:37.594 "blocks": 10240, 00:30:37.594 "percent": 15 00:30:37.594 } 00:30:37.594 }, 00:30:37.594 "base_bdevs_list": [ 00:30:37.594 { 00:30:37.594 "name": "spare", 00:30:37.594 "uuid": "46f306e3-416f-5cd6-b27c-f0fe6e9bbb33", 00:30:37.594 "is_configured": true, 00:30:37.594 "data_offset": 0, 00:30:37.594 "data_size": 65536 00:30:37.594 }, 00:30:37.594 { 00:30:37.594 "name": "BaseBdev2", 00:30:37.594 "uuid": "43fe2fa5-d94a-5c0d-b7b9-a6bb7984fb41", 00:30:37.594 "is_configured": true, 00:30:37.594 "data_offset": 0, 00:30:37.594 "data_size": 65536 00:30:37.594 }, 00:30:37.594 { 00:30:37.594 "name": "BaseBdev3", 00:30:37.594 "uuid": "a542323b-ef04-520f-9eb0-1fa3bad66de7", 00:30:37.594 "is_configured": true, 00:30:37.594 "data_offset": 0, 00:30:37.594 "data_size": 65536 00:30:37.594 }, 00:30:37.594 { 00:30:37.594 "name": "BaseBdev4", 00:30:37.594 "uuid": "df72e05a-f8f1-5471-b04c-bc894125ca17", 00:30:37.594 "is_configured": true, 00:30:37.594 "data_offset": 0, 00:30:37.594 "data_size": 65536 00:30:37.594 } 00:30:37.594 ] 00:30:37.594 }' 00:30:37.594 17:27:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:37.854 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:37.854 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:37.854 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:37.854 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:30:37.854 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:30:37.854 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:30:37.854 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:30:37.854 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:37.854 17:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.854 17:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:37.854 [2024-11-26 17:27:07.785494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:37.854 [2024-11-26 17:27:07.857413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:37.854 [2024-11-26 17:27:07.963114] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:30:37.854 [2024-11-26 17:27:07.963185] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:30:38.113 17:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.113 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:30:38.113 17:27:07 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:30:38.113 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:38.113 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:38.113 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:38.113 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:38.113 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:38.113 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.113 17:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.113 17:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:38.113 17:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:38.113 "name": "raid_bdev1", 00:30:38.113 "uuid": "b3267197-d2ea-46f8-a498-a30744730492", 00:30:38.113 "strip_size_kb": 0, 00:30:38.113 "state": "online", 00:30:38.113 "raid_level": "raid1", 00:30:38.113 "superblock": false, 00:30:38.113 "num_base_bdevs": 4, 00:30:38.113 "num_base_bdevs_discovered": 3, 00:30:38.113 "num_base_bdevs_operational": 3, 00:30:38.113 "process": { 00:30:38.113 "type": "rebuild", 00:30:38.113 "target": "spare", 00:30:38.113 "progress": { 00:30:38.113 "blocks": 14336, 00:30:38.113 "percent": 21 00:30:38.113 } 00:30:38.113 }, 00:30:38.113 "base_bdevs_list": [ 00:30:38.113 { 00:30:38.113 "name": "spare", 00:30:38.113 "uuid": "46f306e3-416f-5cd6-b27c-f0fe6e9bbb33", 00:30:38.113 
"is_configured": true, 00:30:38.113 "data_offset": 0, 00:30:38.113 "data_size": 65536 00:30:38.113 }, 00:30:38.113 { 00:30:38.113 "name": null, 00:30:38.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.113 "is_configured": false, 00:30:38.113 "data_offset": 0, 00:30:38.113 "data_size": 65536 00:30:38.113 }, 00:30:38.113 { 00:30:38.113 "name": "BaseBdev3", 00:30:38.113 "uuid": "a542323b-ef04-520f-9eb0-1fa3bad66de7", 00:30:38.113 "is_configured": true, 00:30:38.113 "data_offset": 0, 00:30:38.113 "data_size": 65536 00:30:38.113 }, 00:30:38.113 { 00:30:38.113 "name": "BaseBdev4", 00:30:38.113 "uuid": "df72e05a-f8f1-5471-b04c-bc894125ca17", 00:30:38.113 "is_configured": true, 00:30:38.113 "data_offset": 0, 00:30:38.113 "data_size": 65536 00:30:38.113 } 00:30:38.113 ] 00:30:38.113 }' 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:38.113 [2024-11-26 17:27:08.091078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=494 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:38.113 
17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.113 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:38.113 "name": "raid_bdev1", 00:30:38.113 "uuid": "b3267197-d2ea-46f8-a498-a30744730492", 00:30:38.113 "strip_size_kb": 0, 00:30:38.113 "state": "online", 00:30:38.113 "raid_level": "raid1", 00:30:38.113 "superblock": false, 00:30:38.113 "num_base_bdevs": 4, 00:30:38.113 "num_base_bdevs_discovered": 3, 00:30:38.113 "num_base_bdevs_operational": 3, 00:30:38.113 "process": { 00:30:38.113 "type": "rebuild", 00:30:38.113 "target": "spare", 00:30:38.113 "progress": { 00:30:38.113 "blocks": 16384, 00:30:38.113 "percent": 25 00:30:38.113 } 00:30:38.113 }, 00:30:38.113 "base_bdevs_list": [ 00:30:38.113 { 00:30:38.113 "name": "spare", 00:30:38.113 "uuid": "46f306e3-416f-5cd6-b27c-f0fe6e9bbb33", 00:30:38.113 "is_configured": true, 00:30:38.113 "data_offset": 0, 00:30:38.113 "data_size": 65536 00:30:38.113 }, 00:30:38.113 { 00:30:38.113 "name": null, 00:30:38.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.113 "is_configured": false, 00:30:38.113 "data_offset": 0, 00:30:38.113 "data_size": 65536 00:30:38.113 }, 00:30:38.113 { 00:30:38.113 "name": "BaseBdev3", 00:30:38.113 "uuid": "a542323b-ef04-520f-9eb0-1fa3bad66de7", 
00:30:38.113 "is_configured": true, 00:30:38.113 "data_offset": 0, 00:30:38.113 "data_size": 65536 00:30:38.113 }, 00:30:38.113 { 00:30:38.113 "name": "BaseBdev4", 00:30:38.113 "uuid": "df72e05a-f8f1-5471-b04c-bc894125ca17", 00:30:38.114 "is_configured": true, 00:30:38.114 "data_offset": 0, 00:30:38.114 "data_size": 65536 00:30:38.114 } 00:30:38.114 ] 00:30:38.114 }' 00:30:38.114 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:38.114 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:38.114 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:38.373 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:38.373 17:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:38.373 140.25 IOPS, 420.75 MiB/s [2024-11-26T17:27:08.487Z] [2024-11-26 17:27:08.357348] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:38.373 [2024-11-26 17:27:08.363922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:38.633 [2024-11-26 17:27:08.591465] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:38.633 [2024-11-26 17:27:08.592436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:38.891 [2024-11-26 17:27:08.931907] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:30:39.150 [2024-11-26 17:27:09.049737] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:30:39.150 [2024-11-26 17:27:09.050451] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:30:39.150 17:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:39.150 17:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:39.150 17:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:39.150 17:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:39.150 17:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:39.150 17:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:39.150 17:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:39.150 17:27:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.150 17:27:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:39.150 17:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:39.418 122.40 IOPS, 367.20 MiB/s [2024-11-26T17:27:09.532Z] 17:27:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.418 17:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:39.418 "name": "raid_bdev1", 00:30:39.418 "uuid": "b3267197-d2ea-46f8-a498-a30744730492", 00:30:39.418 "strip_size_kb": 0, 00:30:39.418 "state": "online", 00:30:39.418 "raid_level": "raid1", 00:30:39.418 "superblock": false, 00:30:39.418 "num_base_bdevs": 4, 00:30:39.418 "num_base_bdevs_discovered": 3, 00:30:39.418 "num_base_bdevs_operational": 3, 00:30:39.418 "process": { 00:30:39.418 "type": "rebuild", 00:30:39.418 "target": "spare", 00:30:39.418 "progress": { 00:30:39.418 "blocks": 30720, 
00:30:39.418 "percent": 46 00:30:39.418 } 00:30:39.418 }, 00:30:39.418 "base_bdevs_list": [ 00:30:39.418 { 00:30:39.418 "name": "spare", 00:30:39.418 "uuid": "46f306e3-416f-5cd6-b27c-f0fe6e9bbb33", 00:30:39.418 "is_configured": true, 00:30:39.418 "data_offset": 0, 00:30:39.418 "data_size": 65536 00:30:39.418 }, 00:30:39.418 { 00:30:39.418 "name": null, 00:30:39.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.418 "is_configured": false, 00:30:39.418 "data_offset": 0, 00:30:39.418 "data_size": 65536 00:30:39.418 }, 00:30:39.418 { 00:30:39.418 "name": "BaseBdev3", 00:30:39.418 "uuid": "a542323b-ef04-520f-9eb0-1fa3bad66de7", 00:30:39.418 "is_configured": true, 00:30:39.418 "data_offset": 0, 00:30:39.418 "data_size": 65536 00:30:39.418 }, 00:30:39.418 { 00:30:39.418 "name": "BaseBdev4", 00:30:39.418 "uuid": "df72e05a-f8f1-5471-b04c-bc894125ca17", 00:30:39.418 "is_configured": true, 00:30:39.418 "data_offset": 0, 00:30:39.418 "data_size": 65536 00:30:39.418 } 00:30:39.418 ] 00:30:39.418 }' 00:30:39.418 17:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:39.418 17:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:39.418 17:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:39.418 17:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:39.418 17:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:39.418 [2024-11-26 17:27:09.426748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:30:39.418 [2024-11-26 17:27:09.427488] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:30:39.687 [2024-11-26 17:27:09.785098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:30:40.516 110.50 IOPS, 331.50 MiB/s [2024-11-26T17:27:10.630Z] 17:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:40.516 "name": "raid_bdev1", 00:30:40.516 "uuid": "b3267197-d2ea-46f8-a498-a30744730492", 00:30:40.516 "strip_size_kb": 0, 00:30:40.516 "state": "online", 00:30:40.516 "raid_level": "raid1", 00:30:40.516 "superblock": false, 00:30:40.516 "num_base_bdevs": 4, 00:30:40.516 "num_base_bdevs_discovered": 3, 00:30:40.516 "num_base_bdevs_operational": 3, 00:30:40.516 "process": { 00:30:40.516 "type": "rebuild", 00:30:40.516 "target": "spare", 00:30:40.516 "progress": { 00:30:40.516 "blocks": 49152, 00:30:40.516 "percent": 75 00:30:40.516 } 00:30:40.516 }, 
00:30:40.516 "base_bdevs_list": [ 00:30:40.516 { 00:30:40.516 "name": "spare", 00:30:40.516 "uuid": "46f306e3-416f-5cd6-b27c-f0fe6e9bbb33", 00:30:40.516 "is_configured": true, 00:30:40.516 "data_offset": 0, 00:30:40.516 "data_size": 65536 00:30:40.516 }, 00:30:40.516 { 00:30:40.516 "name": null, 00:30:40.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.516 "is_configured": false, 00:30:40.516 "data_offset": 0, 00:30:40.516 "data_size": 65536 00:30:40.516 }, 00:30:40.516 { 00:30:40.516 "name": "BaseBdev3", 00:30:40.516 "uuid": "a542323b-ef04-520f-9eb0-1fa3bad66de7", 00:30:40.516 "is_configured": true, 00:30:40.516 "data_offset": 0, 00:30:40.516 "data_size": 65536 00:30:40.516 }, 00:30:40.516 { 00:30:40.516 "name": "BaseBdev4", 00:30:40.516 "uuid": "df72e05a-f8f1-5471-b04c-bc894125ca17", 00:30:40.516 "is_configured": true, 00:30:40.516 "data_offset": 0, 00:30:40.516 "data_size": 65536 00:30:40.516 } 00:30:40.516 ] 00:30:40.516 }' 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:40.516 17:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:41.083 [2024-11-26 17:27:10.923229] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:30:41.342 [2024-11-26 17:27:11.262375] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:41.342 100.00 IOPS, 300.00 MiB/s [2024-11-26T17:27:11.456Z] [2024-11-26 17:27:11.367999] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 
00:30:41.342 [2024-11-26 17:27:11.372172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:41.602 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:41.602 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:41.602 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:41.602 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:41.602 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:41.602 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:41.602 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.602 17:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.602 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.602 17:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:41.602 17:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.602 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:41.602 "name": "raid_bdev1", 00:30:41.602 "uuid": "b3267197-d2ea-46f8-a498-a30744730492", 00:30:41.602 "strip_size_kb": 0, 00:30:41.602 "state": "online", 00:30:41.602 "raid_level": "raid1", 00:30:41.602 "superblock": false, 00:30:41.602 "num_base_bdevs": 4, 00:30:41.602 "num_base_bdevs_discovered": 3, 00:30:41.602 "num_base_bdevs_operational": 3, 00:30:41.602 "base_bdevs_list": [ 00:30:41.602 { 00:30:41.602 "name": "spare", 00:30:41.602 "uuid": "46f306e3-416f-5cd6-b27c-f0fe6e9bbb33", 00:30:41.602 "is_configured": true, 00:30:41.603 "data_offset": 0, 
00:30:41.603 "data_size": 65536 00:30:41.603 }, 00:30:41.603 { 00:30:41.603 "name": null, 00:30:41.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.603 "is_configured": false, 00:30:41.603 "data_offset": 0, 00:30:41.603 "data_size": 65536 00:30:41.603 }, 00:30:41.603 { 00:30:41.603 "name": "BaseBdev3", 00:30:41.603 "uuid": "a542323b-ef04-520f-9eb0-1fa3bad66de7", 00:30:41.603 "is_configured": true, 00:30:41.603 "data_offset": 0, 00:30:41.603 "data_size": 65536 00:30:41.603 }, 00:30:41.603 { 00:30:41.603 "name": "BaseBdev4", 00:30:41.603 "uuid": "df72e05a-f8f1-5471-b04c-bc894125ca17", 00:30:41.603 "is_configured": true, 00:30:41.603 "data_offset": 0, 00:30:41.603 "data_size": 65536 00:30:41.603 } 00:30:41.603 ] 00:30:41.603 }' 00:30:41.603 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:41.603 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:41.603 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:41.603 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:30:41.603 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:30:41.603 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:41.603 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:41.603 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:41.603 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:41.603 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:41.603 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.603 17:27:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.603 17:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.603 17:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:41.603 17:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:41.863 "name": "raid_bdev1", 00:30:41.863 "uuid": "b3267197-d2ea-46f8-a498-a30744730492", 00:30:41.863 "strip_size_kb": 0, 00:30:41.863 "state": "online", 00:30:41.863 "raid_level": "raid1", 00:30:41.863 "superblock": false, 00:30:41.863 "num_base_bdevs": 4, 00:30:41.863 "num_base_bdevs_discovered": 3, 00:30:41.863 "num_base_bdevs_operational": 3, 00:30:41.863 "base_bdevs_list": [ 00:30:41.863 { 00:30:41.863 "name": "spare", 00:30:41.863 "uuid": "46f306e3-416f-5cd6-b27c-f0fe6e9bbb33", 00:30:41.863 "is_configured": true, 00:30:41.863 "data_offset": 0, 00:30:41.863 "data_size": 65536 00:30:41.863 }, 00:30:41.863 { 00:30:41.863 "name": null, 00:30:41.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.863 "is_configured": false, 00:30:41.863 "data_offset": 0, 00:30:41.863 "data_size": 65536 00:30:41.863 }, 00:30:41.863 { 00:30:41.863 "name": "BaseBdev3", 00:30:41.863 "uuid": "a542323b-ef04-520f-9eb0-1fa3bad66de7", 00:30:41.863 "is_configured": true, 00:30:41.863 "data_offset": 0, 00:30:41.863 "data_size": 65536 00:30:41.863 }, 00:30:41.863 { 00:30:41.863 "name": "BaseBdev4", 00:30:41.863 "uuid": "df72e05a-f8f1-5471-b04c-bc894125ca17", 00:30:41.863 "is_configured": true, 00:30:41.863 "data_offset": 0, 00:30:41.863 "data_size": 65536 00:30:41.863 } 00:30:41.863 ] 00:30:41.863 }' 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:41.863 "name": "raid_bdev1", 00:30:41.863 "uuid": "b3267197-d2ea-46f8-a498-a30744730492", 00:30:41.863 "strip_size_kb": 0, 00:30:41.863 "state": "online", 00:30:41.863 "raid_level": "raid1", 00:30:41.863 "superblock": false, 00:30:41.863 "num_base_bdevs": 4, 00:30:41.863 "num_base_bdevs_discovered": 3, 00:30:41.863 "num_base_bdevs_operational": 3, 00:30:41.863 "base_bdevs_list": [ 00:30:41.863 { 00:30:41.863 "name": "spare", 00:30:41.863 "uuid": "46f306e3-416f-5cd6-b27c-f0fe6e9bbb33", 00:30:41.863 "is_configured": true, 00:30:41.863 "data_offset": 0, 00:30:41.863 "data_size": 65536 00:30:41.863 }, 00:30:41.863 { 00:30:41.863 "name": null, 00:30:41.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.863 "is_configured": false, 00:30:41.863 "data_offset": 0, 00:30:41.863 "data_size": 65536 00:30:41.863 }, 00:30:41.863 { 00:30:41.863 "name": "BaseBdev3", 00:30:41.863 "uuid": "a542323b-ef04-520f-9eb0-1fa3bad66de7", 00:30:41.863 "is_configured": true, 00:30:41.863 "data_offset": 0, 00:30:41.863 "data_size": 65536 00:30:41.863 }, 00:30:41.863 { 00:30:41.863 "name": "BaseBdev4", 00:30:41.863 "uuid": "df72e05a-f8f1-5471-b04c-bc894125ca17", 00:30:41.863 "is_configured": true, 00:30:41.863 "data_offset": 0, 00:30:41.863 "data_size": 65536 00:30:41.863 } 00:30:41.863 ] 00:30:41.863 }' 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:41.863 17:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:42.434 93.25 IOPS, 279.75 MiB/s [2024-11-26T17:27:12.548Z] 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:42.434 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.434 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:42.434 [2024-11-26 17:27:12.312068] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:42.434 [2024-11-26 17:27:12.312115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:42.434 00:30:42.434 Latency(us) 00:30:42.434 [2024-11-26T17:27:12.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.434 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:42.434 raid_bdev1 : 8.14 91.92 275.77 0.00 0.00 15112.87 312.55 118754.39 00:30:42.434 [2024-11-26T17:27:12.548Z] =================================================================================================================== 00:30:42.434 [2024-11-26T17:27:12.548Z] Total : 91.92 275.77 0.00 0.00 15112.87 312.55 118754.39 00:30:42.434 [2024-11-26 17:27:12.387669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:42.434 [2024-11-26 17:27:12.387745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:42.434 [2024-11-26 17:27:12.387866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:42.434 [2024-11-26 17:27:12.387879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:42.434 { 00:30:42.434 "results": [ 00:30:42.434 { 00:30:42.434 "job": "raid_bdev1", 00:30:42.434 "core_mask": "0x1", 00:30:42.434 "workload": "randrw", 00:30:42.434 "percentage": 50, 00:30:42.434 "status": "finished", 00:30:42.434 "queue_depth": 2, 00:30:42.434 "io_size": 3145728, 00:30:42.434 "runtime": 8.137077, 00:30:42.434 "iops": 91.92490128826358, 00:30:42.434 "mibps": 275.77470386479075, 00:30:42.434 "io_failed": 0, 00:30:42.434 "io_timeout": 0, 00:30:42.434 "avg_latency_us": 15112.872555462494, 00:30:42.434 "min_latency_us": 312.54618473895584, 00:30:42.434 "max_latency_us": 118754.39036144578 00:30:42.434 } 00:30:42.434 ], 00:30:42.434 
"core_count": 1 00:30:42.434 } 00:30:42.434 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.434 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:42.434 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:30:42.434 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.434 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:42.434 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.434 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:30:42.434 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:42.435 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:30:42.435 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:30:42.435 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:42.435 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:30:42.435 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:42.435 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:42.435 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:42.435 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:42.435 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:42.435 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:42.435 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:30:42.694 /dev/nbd0 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:42.694 1+0 records in 00:30:42.694 1+0 records out 00:30:42.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318928 s, 12.8 MB/s 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:30:42.694 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:30:42.695 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:30:42.695 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:30:42.695 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:30:42.695 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:42.695 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:30:42.695 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:42.695 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:42.695 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:42.695 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:42.695 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:42.695 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:42.695 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:30:42.954 /dev/nbd1 00:30:42.955 17:27:12 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:42.955 1+0 records in 00:30:42.955 1+0 records out 00:30:42.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556083 s, 7.4 MB/s 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:30:42.955 17:27:12 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:42.955 17:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:43.214 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:30:43.214 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:43.214 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:43.214 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:43.214 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:43.214 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:43.214 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in 
"${base_bdevs[@]:1}" 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:43.474 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:43.746 /dev/nbd1 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w 
nbd1 /proc/partitions 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:43.746 1+0 records in 00:30:43.746 1+0 records out 00:30:43.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042517 s, 9.6 MB/s 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:43.746 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:44.005 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:30:44.005 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:44.005 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:44.005 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:30:44.005 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:44.005 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:44.005 17:27:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:44.005 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:44.005 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:44.005 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:44.005 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:44.005 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:44.005 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:44.263 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:44.263 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:44.263 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:30:44.263 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:44.263 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:44.263 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:44.263 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:44.263 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:44.263 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:30:44.263 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:44.263 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:44.263 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:44.263 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:44.264 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:44.264 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:44.523 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:44.523 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:44.523 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:30:44.523 17:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78907 00:30:44.523 17:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78907 ']' 00:30:44.523 17:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78907 00:30:44.523 17:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:30:44.523 17:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:44.523 17:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78907 00:30:44.523 17:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:44.523 17:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:44.523 killing process with pid 78907 00:30:44.523 17:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78907' 00:30:44.523 17:27:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78907 00:30:44.523 Received shutdown signal, test time was about 10.191518 seconds 00:30:44.523 00:30:44.523 Latency(us) 00:30:44.523 [2024-11-26T17:27:14.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.523 [2024-11-26T17:27:14.637Z] =================================================================================================================== 00:30:44.523 [2024-11-26T17:27:14.637Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:44.523 [2024-11-26 17:27:14.415053] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:44.523 17:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78907 00:30:44.782 [2024-11-26 17:27:14.875184] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:46.160 17:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:30:46.160 00:30:46.160 real 0m13.911s 00:30:46.160 user 0m17.480s 00:30:46.160 sys 0m2.209s 00:30:46.160 17:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:46.160 17:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:46.160 ************************************ 00:30:46.160 END TEST raid_rebuild_test_io 00:30:46.160 ************************************ 00:30:46.160 17:27:16 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:30:46.160 17:27:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:30:46.160 17:27:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:46.160 17:27:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:46.160 ************************************ 00:30:46.160 START TEST raid_rebuild_test_sb_io 00:30:46.160 ************************************ 00:30:46.160 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:30:46.160 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:30:46.160 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:30:46.160 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:30:46.160 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:30:46.160 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:30:46.161 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:30:46.161 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:46.161 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:30:46.161 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:46.161 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:46.161 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:30:46.161 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:46.161 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:46.161 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:30:46.161 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:46.161 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:46.161 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:30:46.161 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:46.161 17:27:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:46.161 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:46.161 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79316 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79316 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79316 ']' 00:30:46.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:46.419 17:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:46.419 [2024-11-26 17:27:16.377554] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:30:46.419 [2024-11-26 17:27:16.377727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79316 ] 00:30:46.419 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:46.419 Zero copy mechanism will not be used. 
00:30:46.678 [2024-11-26 17:27:16.567282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.678 [2024-11-26 17:27:16.720192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.937 [2024-11-26 17:27:16.966168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:46.937 [2024-11-26 17:27:16.966219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:47.196 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:47.196 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:30:47.196 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:47.196 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:47.196 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.196 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.456 BaseBdev1_malloc 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.456 [2024-11-26 17:27:17.317813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:47.456 [2024-11-26 17:27:17.318061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:47.456 [2024-11-26 17:27:17.318138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:30:47.456 [2024-11-26 17:27:17.318396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:47.456 [2024-11-26 17:27:17.321426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:47.456 [2024-11-26 17:27:17.321604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:47.456 BaseBdev1 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.456 BaseBdev2_malloc 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.456 [2024-11-26 17:27:17.377404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:47.456 [2024-11-26 17:27:17.377612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:47.456 [2024-11-26 17:27:17.377695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:47.456 [2024-11-26 17:27:17.377800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:47.456 [2024-11-26 17:27:17.380493] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:47.456 [2024-11-26 17:27:17.380680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:47.456 BaseBdev2 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.456 BaseBdev3_malloc 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.456 [2024-11-26 17:27:17.450470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:47.456 [2024-11-26 17:27:17.450583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:47.456 [2024-11-26 17:27:17.450613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:47.456 [2024-11-26 17:27:17.450630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:47.456 [2024-11-26 17:27:17.453430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:47.456 BaseBdev3 00:30:47.456 [2024-11-26 17:27:17.453658] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.456 BaseBdev4_malloc 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.456 [2024-11-26 17:27:17.511741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:47.456 [2024-11-26 17:27:17.511955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:47.456 [2024-11-26 17:27:17.512159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:30:47.456 [2024-11-26 17:27:17.512281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:47.456 [2024-11-26 17:27:17.515416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:47.456 [2024-11-26 17:27:17.515593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:47.456 BaseBdev4 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.456 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.715 spare_malloc 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.715 spare_delay 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.715 [2024-11-26 17:27:17.588396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:47.715 [2024-11-26 17:27:17.588480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:47.715 [2024-11-26 17:27:17.588508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:30:47.715 [2024-11-26 17:27:17.588551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:47.715 [2024-11-26 17:27:17.591404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:47.715 [2024-11-26 17:27:17.591453] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:47.715 spare 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.715 [2024-11-26 17:27:17.600477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:47.715 [2024-11-26 17:27:17.602976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:47.715 [2024-11-26 17:27:17.603050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:47.715 [2024-11-26 17:27:17.603107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:47.715 [2024-11-26 17:27:17.603309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:47.715 [2024-11-26 17:27:17.603326] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:47.715 [2024-11-26 17:27:17.603648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:30:47.715 [2024-11-26 17:27:17.603860] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:47.715 [2024-11-26 17:27:17.603879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:47.715 [2024-11-26 17:27:17.604085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:47.715 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:47.716 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:47.716 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:47.716 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:47.716 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:47.716 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:47.716 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:47.716 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:47.716 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.716 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.716 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.716 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:47.716 "name": "raid_bdev1", 00:30:47.716 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:47.716 "strip_size_kb": 0, 00:30:47.716 "state": "online", 00:30:47.716 "raid_level": "raid1", 
00:30:47.716 "superblock": true, 00:30:47.716 "num_base_bdevs": 4, 00:30:47.716 "num_base_bdevs_discovered": 4, 00:30:47.716 "num_base_bdevs_operational": 4, 00:30:47.716 "base_bdevs_list": [ 00:30:47.716 { 00:30:47.716 "name": "BaseBdev1", 00:30:47.716 "uuid": "9e80b880-886b-5e79-bf6a-fd6f0ecd79bf", 00:30:47.716 "is_configured": true, 00:30:47.716 "data_offset": 2048, 00:30:47.716 "data_size": 63488 00:30:47.716 }, 00:30:47.716 { 00:30:47.716 "name": "BaseBdev2", 00:30:47.716 "uuid": "a76dc95e-1647-575f-a9f9-1cc13dbff4c9", 00:30:47.716 "is_configured": true, 00:30:47.716 "data_offset": 2048, 00:30:47.716 "data_size": 63488 00:30:47.716 }, 00:30:47.716 { 00:30:47.716 "name": "BaseBdev3", 00:30:47.716 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:47.716 "is_configured": true, 00:30:47.716 "data_offset": 2048, 00:30:47.716 "data_size": 63488 00:30:47.716 }, 00:30:47.716 { 00:30:47.716 "name": "BaseBdev4", 00:30:47.716 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:47.716 "is_configured": true, 00:30:47.716 "data_offset": 2048, 00:30:47.716 "data_size": 63488 00:30:47.716 } 00:30:47.716 ] 00:30:47.716 }' 00:30:47.716 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:47.716 17:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.975 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:47.975 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.975 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:30:47.975 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.975 [2024-11-26 17:27:18.072199] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:48.234 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:30:48.234 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:30:48.234 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.234 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.234 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:48.235 [2024-11-26 17:27:18.171680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:48.235 17:27:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:48.235 "name": "raid_bdev1", 00:30:48.235 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:48.235 "strip_size_kb": 0, 00:30:48.235 "state": "online", 00:30:48.235 "raid_level": "raid1", 00:30:48.235 "superblock": true, 00:30:48.235 "num_base_bdevs": 4, 00:30:48.235 "num_base_bdevs_discovered": 3, 00:30:48.235 "num_base_bdevs_operational": 3, 00:30:48.235 "base_bdevs_list": [ 00:30:48.235 { 00:30:48.235 "name": null, 00:30:48.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.235 "is_configured": false, 00:30:48.235 "data_offset": 0, 00:30:48.235 "data_size": 
63488 00:30:48.235 }, 00:30:48.235 { 00:30:48.235 "name": "BaseBdev2", 00:30:48.235 "uuid": "a76dc95e-1647-575f-a9f9-1cc13dbff4c9", 00:30:48.235 "is_configured": true, 00:30:48.235 "data_offset": 2048, 00:30:48.235 "data_size": 63488 00:30:48.235 }, 00:30:48.235 { 00:30:48.235 "name": "BaseBdev3", 00:30:48.235 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:48.235 "is_configured": true, 00:30:48.235 "data_offset": 2048, 00:30:48.235 "data_size": 63488 00:30:48.235 }, 00:30:48.235 { 00:30:48.235 "name": "BaseBdev4", 00:30:48.235 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:48.235 "is_configured": true, 00:30:48.235 "data_offset": 2048, 00:30:48.235 "data_size": 63488 00:30:48.235 } 00:30:48.235 ] 00:30:48.235 }' 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:48.235 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:48.235 [2024-11-26 17:27:18.281800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:30:48.235 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:48.235 Zero copy mechanism will not be used. 00:30:48.235 Running I/O for 60 seconds... 
00:30:48.803 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:48.803 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.803 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:48.803 [2024-11-26 17:27:18.654947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:48.803 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.803 17:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:30:48.803 [2024-11-26 17:27:18.754610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:30:48.803 [2024-11-26 17:27:18.757210] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:48.803 [2024-11-26 17:27:18.876503] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:48.803 [2024-11-26 17:27:18.878143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:49.061 [2024-11-26 17:27:19.098550] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:49.061 [2024-11-26 17:27:19.099412] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:49.578 138.00 IOPS, 414.00 MiB/s [2024-11-26T17:27:19.692Z] [2024-11-26 17:27:19.486540] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:49.578 [2024-11-26 17:27:19.487445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:49.837 [2024-11-26 17:27:19.697431] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:49.837 [2024-11-26 17:27:19.698413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:49.837 "name": "raid_bdev1", 00:30:49.837 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:49.837 "strip_size_kb": 0, 00:30:49.837 "state": "online", 00:30:49.837 "raid_level": "raid1", 00:30:49.837 "superblock": true, 00:30:49.837 "num_base_bdevs": 4, 00:30:49.837 "num_base_bdevs_discovered": 4, 00:30:49.837 "num_base_bdevs_operational": 4, 00:30:49.837 "process": { 00:30:49.837 "type": "rebuild", 00:30:49.837 "target": "spare", 00:30:49.837 "progress": { 
00:30:49.837 "blocks": 10240, 00:30:49.837 "percent": 16 00:30:49.837 } 00:30:49.837 }, 00:30:49.837 "base_bdevs_list": [ 00:30:49.837 { 00:30:49.837 "name": "spare", 00:30:49.837 "uuid": "f689818d-a453-5d21-a64e-1aa0d183a2a5", 00:30:49.837 "is_configured": true, 00:30:49.837 "data_offset": 2048, 00:30:49.837 "data_size": 63488 00:30:49.837 }, 00:30:49.837 { 00:30:49.837 "name": "BaseBdev2", 00:30:49.837 "uuid": "a76dc95e-1647-575f-a9f9-1cc13dbff4c9", 00:30:49.837 "is_configured": true, 00:30:49.837 "data_offset": 2048, 00:30:49.837 "data_size": 63488 00:30:49.837 }, 00:30:49.837 { 00:30:49.837 "name": "BaseBdev3", 00:30:49.837 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:49.837 "is_configured": true, 00:30:49.837 "data_offset": 2048, 00:30:49.837 "data_size": 63488 00:30:49.837 }, 00:30:49.837 { 00:30:49.837 "name": "BaseBdev4", 00:30:49.837 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:49.837 "is_configured": true, 00:30:49.837 "data_offset": 2048, 00:30:49.837 "data_size": 63488 00:30:49.837 } 00:30:49.837 ] 00:30:49.837 }' 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.837 17:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:49.838 [2024-11-26 17:27:19.866682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:50.096 [2024-11-26 
17:27:19.956618] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:50.096 [2024-11-26 17:27:19.961479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:50.096 [2024-11-26 17:27:19.961547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:50.096 [2024-11-26 17:27:19.961569] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:50.096 [2024-11-26 17:27:20.009824] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:50.097 "name": "raid_bdev1", 00:30:50.097 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:50.097 "strip_size_kb": 0, 00:30:50.097 "state": "online", 00:30:50.097 "raid_level": "raid1", 00:30:50.097 "superblock": true, 00:30:50.097 "num_base_bdevs": 4, 00:30:50.097 "num_base_bdevs_discovered": 3, 00:30:50.097 "num_base_bdevs_operational": 3, 00:30:50.097 "base_bdevs_list": [ 00:30:50.097 { 00:30:50.097 "name": null, 00:30:50.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:50.097 "is_configured": false, 00:30:50.097 "data_offset": 0, 00:30:50.097 "data_size": 63488 00:30:50.097 }, 00:30:50.097 { 00:30:50.097 "name": "BaseBdev2", 00:30:50.097 "uuid": "a76dc95e-1647-575f-a9f9-1cc13dbff4c9", 00:30:50.097 "is_configured": true, 00:30:50.097 "data_offset": 2048, 00:30:50.097 "data_size": 63488 00:30:50.097 }, 00:30:50.097 { 00:30:50.097 "name": "BaseBdev3", 00:30:50.097 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:50.097 "is_configured": true, 00:30:50.097 "data_offset": 2048, 00:30:50.097 "data_size": 63488 00:30:50.097 }, 00:30:50.097 { 00:30:50.097 "name": "BaseBdev4", 00:30:50.097 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:50.097 "is_configured": true, 00:30:50.097 "data_offset": 2048, 00:30:50.097 "data_size": 63488 00:30:50.097 } 00:30:50.097 ] 00:30:50.097 }' 00:30:50.097 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:50.097 17:27:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:50.614 117.50 IOPS, 352.50 MiB/s [2024-11-26T17:27:20.728Z] 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:50.614 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:50.614 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:50.614 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:50.614 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:50.614 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:50.614 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:50.614 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.614 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:50.614 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.614 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:50.615 "name": "raid_bdev1", 00:30:50.615 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:50.615 "strip_size_kb": 0, 00:30:50.615 "state": "online", 00:30:50.615 "raid_level": "raid1", 00:30:50.615 "superblock": true, 00:30:50.615 "num_base_bdevs": 4, 00:30:50.615 "num_base_bdevs_discovered": 3, 00:30:50.615 "num_base_bdevs_operational": 3, 00:30:50.615 "base_bdevs_list": [ 00:30:50.615 { 00:30:50.615 "name": null, 00:30:50.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:50.615 "is_configured": false, 00:30:50.615 "data_offset": 0, 00:30:50.615 "data_size": 63488 00:30:50.615 }, 00:30:50.615 { 
00:30:50.615 "name": "BaseBdev2", 00:30:50.615 "uuid": "a76dc95e-1647-575f-a9f9-1cc13dbff4c9", 00:30:50.615 "is_configured": true, 00:30:50.615 "data_offset": 2048, 00:30:50.615 "data_size": 63488 00:30:50.615 }, 00:30:50.615 { 00:30:50.615 "name": "BaseBdev3", 00:30:50.615 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:50.615 "is_configured": true, 00:30:50.615 "data_offset": 2048, 00:30:50.615 "data_size": 63488 00:30:50.615 }, 00:30:50.615 { 00:30:50.615 "name": "BaseBdev4", 00:30:50.615 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:50.615 "is_configured": true, 00:30:50.615 "data_offset": 2048, 00:30:50.615 "data_size": 63488 00:30:50.615 } 00:30:50.615 ] 00:30:50.615 }' 00:30:50.615 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:50.615 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:50.615 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:50.615 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:50.615 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:50.615 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.615 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:50.615 [2024-11-26 17:27:20.662441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:50.615 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.615 17:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:30:50.874 [2024-11-26 17:27:20.730179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:30:50.874 [2024-11-26 17:27:20.732875] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:50.874 [2024-11-26 17:27:20.888394] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:50.874 [2024-11-26 17:27:20.897761] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:51.132 [2024-11-26 17:27:21.152886] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:51.132 [2024-11-26 17:27:21.153723] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:51.670 128.33 IOPS, 385.00 MiB/s [2024-11-26T17:27:21.785Z] [2024-11-26 17:27:21.625779] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:51.671 [2024-11-26 17:27:21.626472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:51.671 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:51.671 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:51.671 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:51.671 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:51.671 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:51.671 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:51.671 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.671 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:30:51.671 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:51.671 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.671 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:51.671 "name": "raid_bdev1", 00:30:51.671 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:51.671 "strip_size_kb": 0, 00:30:51.671 "state": "online", 00:30:51.671 "raid_level": "raid1", 00:30:51.671 "superblock": true, 00:30:51.671 "num_base_bdevs": 4, 00:30:51.671 "num_base_bdevs_discovered": 4, 00:30:51.671 "num_base_bdevs_operational": 4, 00:30:51.671 "process": { 00:30:51.671 "type": "rebuild", 00:30:51.671 "target": "spare", 00:30:51.671 "progress": { 00:30:51.671 "blocks": 10240, 00:30:51.671 "percent": 16 00:30:51.671 } 00:30:51.671 }, 00:30:51.671 "base_bdevs_list": [ 00:30:51.671 { 00:30:51.671 "name": "spare", 00:30:51.671 "uuid": "f689818d-a453-5d21-a64e-1aa0d183a2a5", 00:30:51.671 "is_configured": true, 00:30:51.671 "data_offset": 2048, 00:30:51.671 "data_size": 63488 00:30:51.671 }, 00:30:51.671 { 00:30:51.671 "name": "BaseBdev2", 00:30:51.671 "uuid": "a76dc95e-1647-575f-a9f9-1cc13dbff4c9", 00:30:51.671 "is_configured": true, 00:30:51.671 "data_offset": 2048, 00:30:51.671 "data_size": 63488 00:30:51.671 }, 00:30:51.671 { 00:30:51.671 "name": "BaseBdev3", 00:30:51.671 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:51.671 "is_configured": true, 00:30:51.671 "data_offset": 2048, 00:30:51.671 "data_size": 63488 00:30:51.671 }, 00:30:51.671 { 00:30:51.671 "name": "BaseBdev4", 00:30:51.671 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:51.671 "is_configured": true, 00:30:51.671 "data_offset": 2048, 00:30:51.671 "data_size": 63488 00:30:51.671 } 00:30:51.671 ] 00:30:51.671 }' 00:30:51.671 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:30:51.928 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:51.928 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:51.928 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:51.928 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:30:51.928 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:30:51.928 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:30:51.928 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:30:51.928 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:30:51.928 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:30:51.928 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:51.928 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.928 17:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:51.928 [2024-11-26 17:27:21.852065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:51.928 [2024-11-26 17:27:21.960629] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:52.186 [2024-11-26 17:27:22.160433] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:30:52.186 [2024-11-26 17:27:22.160772] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
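The `[: =: unary operator expected` failure logged above (`bdev_raid.sh` line 666, `'[' = false ']'`) is the classic single-bracket pitfall: an unquoted variable expanded to the empty string, so `[` received only `= false` and could not parse a binary comparison. A minimal reproduction outside the test suite (the variable name here is illustrative, not the actual SPDK variable):

```shell
# Simulate a variable that ended up empty at test time.
flag=""

# Unquoted expansion: after word splitting the command is '[ = false ]',
# which '[' rejects with "unary operator expected" (non-zero status),
# so control falls through to the else branch.
if [ $flag = false ] 2>/dev/null; then
    echo "never reached"
else
    echo "unquoted comparison failed to parse"
fi

# Quoting the expansion keeps the empty operand in place, so the
# comparison is simply false instead of a parse error.
[ "$flag" = false ] || echo "quoted comparison is safely false"
```

Quoting the left-hand operand (or using bash's `[[ ... ]]`, which does not word-split) avoids the spurious error; as the log shows, the test run continued past it regardless.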
00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:52.186 "name": "raid_bdev1", 00:30:52.186 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:52.186 "strip_size_kb": 0, 00:30:52.186 "state": "online", 00:30:52.186 "raid_level": "raid1", 00:30:52.186 "superblock": true, 00:30:52.186 "num_base_bdevs": 4, 00:30:52.186 "num_base_bdevs_discovered": 3, 00:30:52.186 "num_base_bdevs_operational": 3, 00:30:52.186 "process": { 00:30:52.186 "type": "rebuild", 00:30:52.186 "target": "spare", 00:30:52.186 "progress": { 00:30:52.186 "blocks": 14336, 00:30:52.186 "percent": 22 
00:30:52.186 } 00:30:52.186 }, 00:30:52.186 "base_bdevs_list": [ 00:30:52.186 { 00:30:52.186 "name": "spare", 00:30:52.186 "uuid": "f689818d-a453-5d21-a64e-1aa0d183a2a5", 00:30:52.186 "is_configured": true, 00:30:52.186 "data_offset": 2048, 00:30:52.186 "data_size": 63488 00:30:52.186 }, 00:30:52.186 { 00:30:52.186 "name": null, 00:30:52.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:52.186 "is_configured": false, 00:30:52.186 "data_offset": 0, 00:30:52.186 "data_size": 63488 00:30:52.186 }, 00:30:52.186 { 00:30:52.186 "name": "BaseBdev3", 00:30:52.186 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:52.186 "is_configured": true, 00:30:52.186 "data_offset": 2048, 00:30:52.186 "data_size": 63488 00:30:52.186 }, 00:30:52.186 { 00:30:52.186 "name": "BaseBdev4", 00:30:52.186 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:52.186 "is_configured": true, 00:30:52.186 "data_offset": 2048, 00:30:52.186 "data_size": 63488 00:30:52.186 } 00:30:52.186 ] 00:30:52.186 }' 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:52.186 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:52.445 113.50 IOPS, 340.50 MiB/s [2024-11-26T17:27:22.559Z] 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=508 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:52.445 
17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:52.445 "name": "raid_bdev1", 00:30:52.445 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:52.445 "strip_size_kb": 0, 00:30:52.445 "state": "online", 00:30:52.445 "raid_level": "raid1", 00:30:52.445 "superblock": true, 00:30:52.445 "num_base_bdevs": 4, 00:30:52.445 "num_base_bdevs_discovered": 3, 00:30:52.445 "num_base_bdevs_operational": 3, 00:30:52.445 "process": { 00:30:52.445 "type": "rebuild", 00:30:52.445 "target": "spare", 00:30:52.445 "progress": { 00:30:52.445 "blocks": 16384, 00:30:52.445 "percent": 25 00:30:52.445 } 00:30:52.445 }, 00:30:52.445 "base_bdevs_list": [ 00:30:52.445 { 00:30:52.445 "name": "spare", 00:30:52.445 "uuid": "f689818d-a453-5d21-a64e-1aa0d183a2a5", 00:30:52.445 "is_configured": true, 00:30:52.445 "data_offset": 2048, 00:30:52.445 "data_size": 63488 00:30:52.445 }, 00:30:52.445 { 00:30:52.445 "name": null, 00:30:52.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:52.445 "is_configured": false, 00:30:52.445 "data_offset": 0, 00:30:52.445 "data_size": 
63488 00:30:52.445 }, 00:30:52.445 { 00:30:52.445 "name": "BaseBdev3", 00:30:52.445 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:52.445 "is_configured": true, 00:30:52.445 "data_offset": 2048, 00:30:52.445 "data_size": 63488 00:30:52.445 }, 00:30:52.445 { 00:30:52.445 "name": "BaseBdev4", 00:30:52.445 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:52.445 "is_configured": true, 00:30:52.445 "data_offset": 2048, 00:30:52.445 "data_size": 63488 00:30:52.445 } 00:30:52.445 ] 00:30:52.445 }' 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:52.445 17:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:52.445 [2024-11-26 17:27:22.522688] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:52.703 [2024-11-26 17:27:22.747331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:53.269 [2024-11-26 17:27:23.190826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:30:53.527 99.80 IOPS, 299.40 MiB/s [2024-11-26T17:27:23.641Z] 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:53.527 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:53.527 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:53.527 17:27:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:53.527 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:53.527 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:53.527 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:53.527 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:53.527 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.527 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:53.527 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.527 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:53.527 "name": "raid_bdev1", 00:30:53.527 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:53.527 "strip_size_kb": 0, 00:30:53.527 "state": "online", 00:30:53.527 "raid_level": "raid1", 00:30:53.527 "superblock": true, 00:30:53.527 "num_base_bdevs": 4, 00:30:53.527 "num_base_bdevs_discovered": 3, 00:30:53.527 "num_base_bdevs_operational": 3, 00:30:53.527 "process": { 00:30:53.527 "type": "rebuild", 00:30:53.527 "target": "spare", 00:30:53.527 "progress": { 00:30:53.527 "blocks": 32768, 00:30:53.527 "percent": 51 00:30:53.527 } 00:30:53.527 }, 00:30:53.527 "base_bdevs_list": [ 00:30:53.527 { 00:30:53.527 "name": "spare", 00:30:53.527 "uuid": "f689818d-a453-5d21-a64e-1aa0d183a2a5", 00:30:53.527 "is_configured": true, 00:30:53.527 "data_offset": 2048, 00:30:53.527 "data_size": 63488 00:30:53.527 }, 00:30:53.527 { 00:30:53.527 "name": null, 00:30:53.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:53.527 "is_configured": false, 00:30:53.527 "data_offset": 0, 00:30:53.527 "data_size": 63488 
00:30:53.527 }, 00:30:53.527 { 00:30:53.527 "name": "BaseBdev3", 00:30:53.527 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:53.527 "is_configured": true, 00:30:53.527 "data_offset": 2048, 00:30:53.527 "data_size": 63488 00:30:53.527 }, 00:30:53.527 { 00:30:53.527 "name": "BaseBdev4", 00:30:53.527 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:53.527 "is_configured": true, 00:30:53.527 "data_offset": 2048, 00:30:53.527 "data_size": 63488 00:30:53.527 } 00:30:53.527 ] 00:30:53.527 }' 00:30:53.527 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:53.527 [2024-11-26 17:27:23.530999] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:30:53.527 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:53.527 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:53.527 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:53.527 17:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:54.721 88.83 IOPS, 266.50 MiB/s [2024-11-26T17:27:24.835Z] 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:54.721 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:54.721 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:54.721 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:54.721 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:54.721 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:54.721 17:27:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:54.721 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.721 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:54.721 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:54.721 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.721 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:54.721 "name": "raid_bdev1", 00:30:54.721 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:54.721 "strip_size_kb": 0, 00:30:54.721 "state": "online", 00:30:54.721 "raid_level": "raid1", 00:30:54.721 "superblock": true, 00:30:54.721 "num_base_bdevs": 4, 00:30:54.721 "num_base_bdevs_discovered": 3, 00:30:54.721 "num_base_bdevs_operational": 3, 00:30:54.721 "process": { 00:30:54.721 "type": "rebuild", 00:30:54.721 "target": "spare", 00:30:54.721 "progress": { 00:30:54.721 "blocks": 53248, 00:30:54.721 "percent": 83 00:30:54.721 } 00:30:54.721 }, 00:30:54.721 "base_bdevs_list": [ 00:30:54.721 { 00:30:54.721 "name": "spare", 00:30:54.721 "uuid": "f689818d-a453-5d21-a64e-1aa0d183a2a5", 00:30:54.721 "is_configured": true, 00:30:54.721 "data_offset": 2048, 00:30:54.721 "data_size": 63488 00:30:54.721 }, 00:30:54.721 { 00:30:54.721 "name": null, 00:30:54.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.721 "is_configured": false, 00:30:54.721 "data_offset": 0, 00:30:54.721 "data_size": 63488 00:30:54.721 }, 00:30:54.721 { 00:30:54.721 "name": "BaseBdev3", 00:30:54.721 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:54.721 "is_configured": true, 00:30:54.721 "data_offset": 2048, 00:30:54.721 "data_size": 63488 00:30:54.721 }, 00:30:54.721 { 00:30:54.721 "name": "BaseBdev4", 00:30:54.721 "uuid": 
"56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:54.721 "is_configured": true, 00:30:54.721 "data_offset": 2048, 00:30:54.721 "data_size": 63488 00:30:54.721 } 00:30:54.721 ] 00:30:54.721 }' 00:30:54.721 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:54.721 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:54.721 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:54.721 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:54.721 17:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:54.721 [2024-11-26 17:27:24.763051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:30:54.980 [2024-11-26 17:27:24.985009] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:30:55.238 [2024-11-26 17:27:25.212024] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:55.238 81.43 IOPS, 244.29 MiB/s [2024-11-26T17:27:25.352Z] [2024-11-26 17:27:25.311857] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:55.238 [2024-11-26 17:27:25.315844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:55.807 "name": "raid_bdev1", 00:30:55.807 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:55.807 "strip_size_kb": 0, 00:30:55.807 "state": "online", 00:30:55.807 "raid_level": "raid1", 00:30:55.807 "superblock": true, 00:30:55.807 "num_base_bdevs": 4, 00:30:55.807 "num_base_bdevs_discovered": 3, 00:30:55.807 "num_base_bdevs_operational": 3, 00:30:55.807 "base_bdevs_list": [ 00:30:55.807 { 00:30:55.807 "name": "spare", 00:30:55.807 "uuid": "f689818d-a453-5d21-a64e-1aa0d183a2a5", 00:30:55.807 "is_configured": true, 00:30:55.807 "data_offset": 2048, 00:30:55.807 "data_size": 63488 00:30:55.807 }, 00:30:55.807 { 00:30:55.807 "name": null, 00:30:55.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.807 "is_configured": false, 00:30:55.807 "data_offset": 0, 00:30:55.807 "data_size": 63488 00:30:55.807 }, 00:30:55.807 { 00:30:55.807 "name": "BaseBdev3", 00:30:55.807 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:55.807 "is_configured": true, 00:30:55.807 "data_offset": 2048, 00:30:55.807 "data_size": 63488 00:30:55.807 }, 00:30:55.807 { 00:30:55.807 "name": "BaseBdev4", 
00:30:55.807 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:55.807 "is_configured": true, 00:30:55.807 "data_offset": 2048, 00:30:55.807 "data_size": 63488 00:30:55.807 } 00:30:55.807 ] 00:30:55.807 }' 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.807 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:56.067 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.067 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:30:56.067 "name": "raid_bdev1", 00:30:56.067 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:56.067 "strip_size_kb": 0, 00:30:56.067 "state": "online", 00:30:56.067 "raid_level": "raid1", 00:30:56.067 "superblock": true, 00:30:56.067 "num_base_bdevs": 4, 00:30:56.067 "num_base_bdevs_discovered": 3, 00:30:56.067 "num_base_bdevs_operational": 3, 00:30:56.067 "base_bdevs_list": [ 00:30:56.067 { 00:30:56.067 "name": "spare", 00:30:56.067 "uuid": "f689818d-a453-5d21-a64e-1aa0d183a2a5", 00:30:56.067 "is_configured": true, 00:30:56.067 "data_offset": 2048, 00:30:56.067 "data_size": 63488 00:30:56.067 }, 00:30:56.067 { 00:30:56.067 "name": null, 00:30:56.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.067 "is_configured": false, 00:30:56.067 "data_offset": 0, 00:30:56.067 "data_size": 63488 00:30:56.067 }, 00:30:56.067 { 00:30:56.067 "name": "BaseBdev3", 00:30:56.067 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:56.067 "is_configured": true, 00:30:56.067 "data_offset": 2048, 00:30:56.067 "data_size": 63488 00:30:56.067 }, 00:30:56.067 { 00:30:56.067 "name": "BaseBdev4", 00:30:56.067 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:56.067 "is_configured": true, 00:30:56.067 "data_offset": 2048, 00:30:56.067 "data_size": 63488 00:30:56.067 } 00:30:56.067 ] 00:30:56.067 }' 00:30:56.067 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:56.067 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:56.067 17:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:56.067 "name": "raid_bdev1", 00:30:56.067 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:56.067 "strip_size_kb": 0, 00:30:56.067 "state": "online", 00:30:56.067 "raid_level": "raid1", 00:30:56.067 "superblock": true, 00:30:56.067 "num_base_bdevs": 4, 00:30:56.067 "num_base_bdevs_discovered": 3, 00:30:56.067 "num_base_bdevs_operational": 3, 00:30:56.067 "base_bdevs_list": [ 
00:30:56.067 { 00:30:56.067 "name": "spare", 00:30:56.067 "uuid": "f689818d-a453-5d21-a64e-1aa0d183a2a5", 00:30:56.067 "is_configured": true, 00:30:56.067 "data_offset": 2048, 00:30:56.067 "data_size": 63488 00:30:56.067 }, 00:30:56.067 { 00:30:56.067 "name": null, 00:30:56.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.067 "is_configured": false, 00:30:56.067 "data_offset": 0, 00:30:56.067 "data_size": 63488 00:30:56.067 }, 00:30:56.067 { 00:30:56.067 "name": "BaseBdev3", 00:30:56.067 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:56.067 "is_configured": true, 00:30:56.067 "data_offset": 2048, 00:30:56.067 "data_size": 63488 00:30:56.067 }, 00:30:56.067 { 00:30:56.067 "name": "BaseBdev4", 00:30:56.067 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:56.067 "is_configured": true, 00:30:56.067 "data_offset": 2048, 00:30:56.067 "data_size": 63488 00:30:56.067 } 00:30:56.067 ] 00:30:56.067 }' 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:56.067 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:56.584 76.25 IOPS, 228.75 MiB/s [2024-11-26T17:27:26.698Z] 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:56.584 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.584 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:56.585 [2024-11-26 17:27:26.461564] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:56.585 [2024-11-26 17:27:26.461721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:56.585 00:30:56.585 Latency(us) 00:30:56.585 [2024-11-26T17:27:26.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.585 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, 
depth: 2, IO size: 3145728) 00:30:56.585 raid_bdev1 : 8.25 75.19 225.56 0.00 0.00 19059.88 365.19 117912.16 00:30:56.585 [2024-11-26T17:27:26.699Z] =================================================================================================================== 00:30:56.585 [2024-11-26T17:27:26.699Z] Total : 75.19 225.56 0.00 0.00 19059.88 365.19 117912.16 00:30:56.585 [2024-11-26 17:27:26.541494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:56.585 [2024-11-26 17:27:26.541739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:56.585 [2024-11-26 17:27:26.541905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:56.585 [2024-11-26 17:27:26.542051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:56.585 { 00:30:56.585 "results": [ 00:30:56.585 { 00:30:56.585 "job": "raid_bdev1", 00:30:56.585 "core_mask": "0x1", 00:30:56.585 "workload": "randrw", 00:30:56.585 "percentage": 50, 00:30:56.585 "status": "finished", 00:30:56.585 "queue_depth": 2, 00:30:56.585 "io_size": 3145728, 00:30:56.585 "runtime": 8.24625, 00:30:56.585 "iops": 75.18569046536304, 00:30:56.585 "mibps": 225.5570713960891, 00:30:56.585 "io_failed": 0, 00:30:56.585 "io_timeout": 0, 00:30:56.585 "avg_latency_us": 19059.882202357818, 00:30:56.585 "min_latency_us": 365.1855421686747, 00:30:56.585 "max_latency_us": 117912.16064257028 00:30:56.585 } 00:30:56.585 ], 00:30:56.585 "core_count": 1 00:30:56.585 } 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:56.585 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:30:56.844 /dev/nbd0 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:56.844 1+0 records in 00:30:56.844 1+0 records out 00:30:56.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489138 s, 8.4 MB/s 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:56.844 17:27:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:56.844 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:56.845 17:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:30:57.104 /dev/nbd1 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 
-- # local nbd_name=nbd1 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:57.104 1+0 records in 00:30:57.104 1+0 records out 00:30:57.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460344 s, 8.9 MB/s 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:57.104 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- 
# cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:57.363 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:30:57.363 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:57.363 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:57.363 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:57.363 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:57.363 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:57.363 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:57.621 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:57.621 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:57.621 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:57.621 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:57.621 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:57.621 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:57.621 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:57.621 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:57.621 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:30:57.621 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:30:57.621 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:30:57.621 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:57.621 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:30:57.621 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:57.622 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:57.622 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:57.622 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:57.622 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:57.622 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:57.622 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:57.879 /dev/nbd1 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 
00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:57.879 1+0 records in 00:30:57.879 1+0 records out 00:30:57.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432127 s, 9.5 MB/s 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:57.879 17:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:58.137 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:30:58.137 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:58.137 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:58.137 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:58.137 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@51 -- # local i 00:30:58.138 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:58.138 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:58.396 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:58.396 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:58.396 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:58.396 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:58.396 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:58.396 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:58.396 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:58.396 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:58.396 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:30:58.396 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:58.396 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:58.396 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:58.396 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:58.396 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:58.396 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:58.656 [2024-11-26 17:27:28.563018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:58.656 [2024-11-26 17:27:28.563238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:58.656 [2024-11-26 17:27:28.563287] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:30:58.656 [2024-11-26 17:27:28.563302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:58.656 [2024-11-26 17:27:28.566525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:58.656 [2024-11-26 17:27:28.566594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:58.656 [2024-11-26 17:27:28.566730] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:58.656 [2024-11-26 17:27:28.566812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:58.656 spare 00:30:58.656 [2024-11-26 17:27:28.567005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:58.656 [2024-11-26 17:27:28.567170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:58.656 [2024-11-26 17:27:28.667133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:30:58.656 [2024-11-26 17:27:28.667391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:58.656 [2024-11-26 17:27:28.667925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:30:58.656 [2024-11-26 17:27:28.668276] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:30:58.656 [2024-11-26 17:27:28.668397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:30:58.656 [2024-11-26 17:27:28.668796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.656 
17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:58.656 "name": "raid_bdev1", 00:30:58.656 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:58.656 "strip_size_kb": 0, 00:30:58.656 "state": "online", 00:30:58.656 "raid_level": "raid1", 00:30:58.656 "superblock": true, 00:30:58.656 "num_base_bdevs": 4, 00:30:58.656 "num_base_bdevs_discovered": 3, 00:30:58.656 "num_base_bdevs_operational": 3, 00:30:58.656 "base_bdevs_list": [ 00:30:58.656 { 00:30:58.656 "name": "spare", 00:30:58.656 "uuid": "f689818d-a453-5d21-a64e-1aa0d183a2a5", 00:30:58.656 "is_configured": true, 00:30:58.656 "data_offset": 2048, 00:30:58.656 "data_size": 63488 00:30:58.656 }, 00:30:58.656 { 00:30:58.656 "name": null, 00:30:58.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:58.656 "is_configured": false, 00:30:58.656 "data_offset": 2048, 00:30:58.656 "data_size": 63488 00:30:58.656 }, 00:30:58.656 { 00:30:58.656 "name": "BaseBdev3", 00:30:58.656 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:58.656 "is_configured": true, 00:30:58.656 "data_offset": 2048, 00:30:58.656 "data_size": 63488 00:30:58.656 }, 00:30:58.656 { 00:30:58.656 "name": "BaseBdev4", 00:30:58.656 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:58.656 "is_configured": true, 00:30:58.656 "data_offset": 2048, 00:30:58.656 "data_size": 63488 00:30:58.656 } 00:30:58.656 ] 00:30:58.656 }' 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:58.656 17:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:59.230 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:59.230 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:59.230 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:59.230 17:27:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:59.230 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:59.230 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:59.231 "name": "raid_bdev1", 00:30:59.231 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:59.231 "strip_size_kb": 0, 00:30:59.231 "state": "online", 00:30:59.231 "raid_level": "raid1", 00:30:59.231 "superblock": true, 00:30:59.231 "num_base_bdevs": 4, 00:30:59.231 "num_base_bdevs_discovered": 3, 00:30:59.231 "num_base_bdevs_operational": 3, 00:30:59.231 "base_bdevs_list": [ 00:30:59.231 { 00:30:59.231 "name": "spare", 00:30:59.231 "uuid": "f689818d-a453-5d21-a64e-1aa0d183a2a5", 00:30:59.231 "is_configured": true, 00:30:59.231 "data_offset": 2048, 00:30:59.231 "data_size": 63488 00:30:59.231 }, 00:30:59.231 { 00:30:59.231 "name": null, 00:30:59.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.231 "is_configured": false, 00:30:59.231 "data_offset": 2048, 00:30:59.231 "data_size": 63488 00:30:59.231 }, 00:30:59.231 { 00:30:59.231 "name": "BaseBdev3", 00:30:59.231 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:59.231 "is_configured": true, 00:30:59.231 "data_offset": 2048, 00:30:59.231 "data_size": 63488 00:30:59.231 }, 00:30:59.231 { 00:30:59.231 "name": "BaseBdev4", 00:30:59.231 "uuid": 
"56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:59.231 "is_configured": true, 00:30:59.231 "data_offset": 2048, 00:30:59.231 "data_size": 63488 00:30:59.231 } 00:30:59.231 ] 00:30:59.231 }' 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:59.231 [2024-11-26 17:27:29.262349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:59.231 "name": "raid_bdev1", 00:30:59.231 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:30:59.231 "strip_size_kb": 0, 00:30:59.231 "state": "online", 00:30:59.231 "raid_level": "raid1", 00:30:59.231 "superblock": true, 00:30:59.231 "num_base_bdevs": 4, 00:30:59.231 "num_base_bdevs_discovered": 2, 00:30:59.231 
"num_base_bdevs_operational": 2, 00:30:59.231 "base_bdevs_list": [ 00:30:59.231 { 00:30:59.231 "name": null, 00:30:59.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.231 "is_configured": false, 00:30:59.231 "data_offset": 0, 00:30:59.231 "data_size": 63488 00:30:59.231 }, 00:30:59.231 { 00:30:59.231 "name": null, 00:30:59.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.231 "is_configured": false, 00:30:59.231 "data_offset": 2048, 00:30:59.231 "data_size": 63488 00:30:59.231 }, 00:30:59.231 { 00:30:59.231 "name": "BaseBdev3", 00:30:59.231 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:30:59.231 "is_configured": true, 00:30:59.231 "data_offset": 2048, 00:30:59.231 "data_size": 63488 00:30:59.231 }, 00:30:59.231 { 00:30:59.231 "name": "BaseBdev4", 00:30:59.231 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:30:59.231 "is_configured": true, 00:30:59.231 "data_offset": 2048, 00:30:59.231 "data_size": 63488 00:30:59.231 } 00:30:59.231 ] 00:30:59.231 }' 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:59.231 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:59.807 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:59.807 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.807 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:59.807 [2024-11-26 17:27:29.713805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:59.807 [2024-11-26 17:27:29.714038] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:59.807 [2024-11-26 17:27:29.714057] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:59.807 [2024-11-26 17:27:29.714112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:59.807 [2024-11-26 17:27:29.730781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:30:59.807 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.807 17:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:30:59.808 [2024-11-26 17:27:29.733080] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:00.745 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:00.745 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:00.745 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:00.745 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:00.745 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:00.745 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:00.745 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:00.745 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.745 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:00.745 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.745 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:00.745 "name": "raid_bdev1", 00:31:00.745 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:31:00.745 "strip_size_kb": 0, 00:31:00.745 "state": "online", 
00:31:00.745 "raid_level": "raid1", 00:31:00.745 "superblock": true, 00:31:00.745 "num_base_bdevs": 4, 00:31:00.745 "num_base_bdevs_discovered": 3, 00:31:00.745 "num_base_bdevs_operational": 3, 00:31:00.745 "process": { 00:31:00.745 "type": "rebuild", 00:31:00.745 "target": "spare", 00:31:00.745 "progress": { 00:31:00.745 "blocks": 20480, 00:31:00.745 "percent": 32 00:31:00.745 } 00:31:00.745 }, 00:31:00.745 "base_bdevs_list": [ 00:31:00.745 { 00:31:00.745 "name": "spare", 00:31:00.745 "uuid": "f689818d-a453-5d21-a64e-1aa0d183a2a5", 00:31:00.745 "is_configured": true, 00:31:00.745 "data_offset": 2048, 00:31:00.745 "data_size": 63488 00:31:00.745 }, 00:31:00.745 { 00:31:00.745 "name": null, 00:31:00.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.745 "is_configured": false, 00:31:00.745 "data_offset": 2048, 00:31:00.745 "data_size": 63488 00:31:00.745 }, 00:31:00.745 { 00:31:00.745 "name": "BaseBdev3", 00:31:00.745 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:31:00.745 "is_configured": true, 00:31:00.745 "data_offset": 2048, 00:31:00.745 "data_size": 63488 00:31:00.745 }, 00:31:00.745 { 00:31:00.745 "name": "BaseBdev4", 00:31:00.745 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:31:00.745 "is_configured": true, 00:31:00.745 "data_offset": 2048, 00:31:00.745 "data_size": 63488 00:31:00.745 } 00:31:00.745 ] 00:31:00.745 }' 00:31:00.745 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:00.745 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:00.745 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:31:01.005 17:27:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:01.005 [2024-11-26 17:27:30.884930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:01.005 [2024-11-26 17:27:30.940444] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:01.005 [2024-11-26 17:27:30.940731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:01.005 [2024-11-26 17:27:30.940763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:01.005 [2024-11-26 17:27:30.940776] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:01.005 17:27:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.005 17:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.005 17:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:01.005 "name": "raid_bdev1", 00:31:01.005 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:31:01.005 "strip_size_kb": 0, 00:31:01.005 "state": "online", 00:31:01.005 "raid_level": "raid1", 00:31:01.005 "superblock": true, 00:31:01.005 "num_base_bdevs": 4, 00:31:01.005 "num_base_bdevs_discovered": 2, 00:31:01.005 "num_base_bdevs_operational": 2, 00:31:01.005 "base_bdevs_list": [ 00:31:01.005 { 00:31:01.005 "name": null, 00:31:01.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.005 "is_configured": false, 00:31:01.005 "data_offset": 0, 00:31:01.005 "data_size": 63488 00:31:01.005 }, 00:31:01.005 { 00:31:01.005 "name": null, 00:31:01.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.005 "is_configured": false, 00:31:01.005 "data_offset": 2048, 00:31:01.005 "data_size": 63488 00:31:01.005 }, 00:31:01.005 { 00:31:01.005 "name": "BaseBdev3", 00:31:01.005 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:31:01.005 "is_configured": true, 00:31:01.005 "data_offset": 2048, 00:31:01.005 "data_size": 63488 00:31:01.005 }, 00:31:01.005 { 00:31:01.005 "name": "BaseBdev4", 00:31:01.005 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:31:01.005 "is_configured": true, 00:31:01.005 "data_offset": 2048, 00:31:01.005 
"data_size": 63488 00:31:01.005 } 00:31:01.005 ] 00:31:01.005 }' 00:31:01.005 17:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:01.005 17:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:01.574 17:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:01.574 17:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.574 17:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:01.574 [2024-11-26 17:27:31.400123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:01.574 [2024-11-26 17:27:31.400209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:01.574 [2024-11-26 17:27:31.400255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:31:01.574 [2024-11-26 17:27:31.400268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:01.574 [2024-11-26 17:27:31.400874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:01.574 [2024-11-26 17:27:31.400897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:01.574 [2024-11-26 17:27:31.401037] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:01.574 [2024-11-26 17:27:31.401053] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:31:01.574 [2024-11-26 17:27:31.401072] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:31:01.574 [2024-11-26 17:27:31.401098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:01.574 [2024-11-26 17:27:31.417348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:31:01.574 spare 00:31:01.574 17:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.574 17:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:31:01.574 [2024-11-26 17:27:31.419883] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:02.513 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:02.513 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:02.513 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:02.513 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:02.513 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:02.513 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:02.514 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:02.514 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.514 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:02.514 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.514 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:02.514 "name": "raid_bdev1", 00:31:02.514 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:31:02.514 "strip_size_kb": 0, 00:31:02.514 
"state": "online", 00:31:02.514 "raid_level": "raid1", 00:31:02.514 "superblock": true, 00:31:02.514 "num_base_bdevs": 4, 00:31:02.514 "num_base_bdevs_discovered": 3, 00:31:02.514 "num_base_bdevs_operational": 3, 00:31:02.514 "process": { 00:31:02.514 "type": "rebuild", 00:31:02.514 "target": "spare", 00:31:02.514 "progress": { 00:31:02.514 "blocks": 20480, 00:31:02.514 "percent": 32 00:31:02.514 } 00:31:02.514 }, 00:31:02.514 "base_bdevs_list": [ 00:31:02.514 { 00:31:02.514 "name": "spare", 00:31:02.514 "uuid": "f689818d-a453-5d21-a64e-1aa0d183a2a5", 00:31:02.514 "is_configured": true, 00:31:02.514 "data_offset": 2048, 00:31:02.514 "data_size": 63488 00:31:02.514 }, 00:31:02.514 { 00:31:02.514 "name": null, 00:31:02.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:02.514 "is_configured": false, 00:31:02.514 "data_offset": 2048, 00:31:02.514 "data_size": 63488 00:31:02.514 }, 00:31:02.514 { 00:31:02.514 "name": "BaseBdev3", 00:31:02.514 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:31:02.514 "is_configured": true, 00:31:02.514 "data_offset": 2048, 00:31:02.514 "data_size": 63488 00:31:02.514 }, 00:31:02.514 { 00:31:02.514 "name": "BaseBdev4", 00:31:02.514 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:31:02.514 "is_configured": true, 00:31:02.514 "data_offset": 2048, 00:31:02.514 "data_size": 63488 00:31:02.514 } 00:31:02.514 ] 00:31:02.514 }' 00:31:02.514 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:02.514 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:02.514 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:02.514 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:02.514 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:31:02.514 17:27:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.514 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:02.514 [2024-11-26 17:27:32.551590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:02.774 [2024-11-26 17:27:32.627156] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:02.774 [2024-11-26 17:27:32.627267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:02.774 [2024-11-26 17:27:32.627287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:02.774 [2024-11-26 17:27:32.627302] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:02.774 17:27:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.774 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:02.774 "name": "raid_bdev1", 00:31:02.774 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:31:02.774 "strip_size_kb": 0, 00:31:02.774 "state": "online", 00:31:02.774 "raid_level": "raid1", 00:31:02.774 "superblock": true, 00:31:02.774 "num_base_bdevs": 4, 00:31:02.774 "num_base_bdevs_discovered": 2, 00:31:02.774 "num_base_bdevs_operational": 2, 00:31:02.774 "base_bdevs_list": [ 00:31:02.774 { 00:31:02.774 "name": null, 00:31:02.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:02.775 "is_configured": false, 00:31:02.775 "data_offset": 0, 00:31:02.775 "data_size": 63488 00:31:02.775 }, 00:31:02.775 { 00:31:02.775 "name": null, 00:31:02.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:02.775 "is_configured": false, 00:31:02.775 "data_offset": 2048, 00:31:02.775 "data_size": 63488 00:31:02.775 }, 00:31:02.775 { 00:31:02.775 "name": "BaseBdev3", 00:31:02.775 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:31:02.775 "is_configured": true, 00:31:02.775 "data_offset": 2048, 00:31:02.775 "data_size": 63488 00:31:02.775 }, 00:31:02.775 { 00:31:02.775 "name": "BaseBdev4", 00:31:02.775 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:31:02.775 "is_configured": true, 00:31:02.775 "data_offset": 2048, 00:31:02.775 
"data_size": 63488 00:31:02.775 } 00:31:02.775 ] 00:31:02.775 }' 00:31:02.775 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:02.775 17:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:03.034 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:03.034 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:03.034 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:03.034 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:03.034 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:03.034 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:03.034 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.034 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.034 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:03.298 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.298 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:03.298 "name": "raid_bdev1", 00:31:03.298 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:31:03.298 "strip_size_kb": 0, 00:31:03.298 "state": "online", 00:31:03.298 "raid_level": "raid1", 00:31:03.298 "superblock": true, 00:31:03.298 "num_base_bdevs": 4, 00:31:03.298 "num_base_bdevs_discovered": 2, 00:31:03.298 "num_base_bdevs_operational": 2, 00:31:03.298 "base_bdevs_list": [ 00:31:03.298 { 00:31:03.298 "name": null, 00:31:03.298 "uuid": "00000000-0000-0000-0000-000000000000", 
00:31:03.298 "is_configured": false, 00:31:03.298 "data_offset": 0, 00:31:03.298 "data_size": 63488 00:31:03.298 }, 00:31:03.298 { 00:31:03.298 "name": null, 00:31:03.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.298 "is_configured": false, 00:31:03.298 "data_offset": 2048, 00:31:03.298 "data_size": 63488 00:31:03.298 }, 00:31:03.298 { 00:31:03.298 "name": "BaseBdev3", 00:31:03.298 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:31:03.298 "is_configured": true, 00:31:03.298 "data_offset": 2048, 00:31:03.299 "data_size": 63488 00:31:03.299 }, 00:31:03.299 { 00:31:03.299 "name": "BaseBdev4", 00:31:03.299 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:31:03.299 "is_configured": true, 00:31:03.299 "data_offset": 2048, 00:31:03.299 "data_size": 63488 00:31:03.299 } 00:31:03.299 ] 00:31:03.299 }' 00:31:03.299 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:03.299 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:03.299 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:03.299 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:03.299 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:31:03.299 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.299 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:03.299 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.299 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:03.299 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.299 17:27:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:03.299 [2024-11-26 17:27:33.274839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:03.299 [2024-11-26 17:27:33.275085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:03.299 [2024-11-26 17:27:33.275165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:31:03.299 [2024-11-26 17:27:33.275280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:03.299 [2024-11-26 17:27:33.275834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:03.299 [2024-11-26 17:27:33.275859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:03.299 [2024-11-26 17:27:33.275952] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:03.299 [2024-11-26 17:27:33.275973] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:03.299 [2024-11-26 17:27:33.275983] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:03.299 [2024-11-26 17:27:33.275999] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:31:03.299 BaseBdev1 00:31:03.299 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.299 17:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:04.252 "name": "raid_bdev1", 00:31:04.252 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:31:04.252 "strip_size_kb": 0, 00:31:04.252 "state": "online", 00:31:04.252 "raid_level": "raid1", 00:31:04.252 "superblock": true, 00:31:04.252 "num_base_bdevs": 4, 00:31:04.252 "num_base_bdevs_discovered": 2, 00:31:04.252 "num_base_bdevs_operational": 2, 00:31:04.252 "base_bdevs_list": [ 00:31:04.252 { 00:31:04.252 "name": null, 00:31:04.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.252 "is_configured": false, 00:31:04.252 
"data_offset": 0, 00:31:04.252 "data_size": 63488 00:31:04.252 }, 00:31:04.252 { 00:31:04.252 "name": null, 00:31:04.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.252 "is_configured": false, 00:31:04.252 "data_offset": 2048, 00:31:04.252 "data_size": 63488 00:31:04.252 }, 00:31:04.252 { 00:31:04.252 "name": "BaseBdev3", 00:31:04.252 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:31:04.252 "is_configured": true, 00:31:04.252 "data_offset": 2048, 00:31:04.252 "data_size": 63488 00:31:04.252 }, 00:31:04.252 { 00:31:04.252 "name": "BaseBdev4", 00:31:04.252 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:31:04.252 "is_configured": true, 00:31:04.252 "data_offset": 2048, 00:31:04.252 "data_size": 63488 00:31:04.252 } 00:31:04.252 ] 00:31:04.252 }' 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:04.252 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:04.819 "name": "raid_bdev1", 00:31:04.819 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:31:04.819 "strip_size_kb": 0, 00:31:04.819 "state": "online", 00:31:04.819 "raid_level": "raid1", 00:31:04.819 "superblock": true, 00:31:04.819 "num_base_bdevs": 4, 00:31:04.819 "num_base_bdevs_discovered": 2, 00:31:04.819 "num_base_bdevs_operational": 2, 00:31:04.819 "base_bdevs_list": [ 00:31:04.819 { 00:31:04.819 "name": null, 00:31:04.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.819 "is_configured": false, 00:31:04.819 "data_offset": 0, 00:31:04.819 "data_size": 63488 00:31:04.819 }, 00:31:04.819 { 00:31:04.819 "name": null, 00:31:04.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.819 "is_configured": false, 00:31:04.819 "data_offset": 2048, 00:31:04.819 "data_size": 63488 00:31:04.819 }, 00:31:04.819 { 00:31:04.819 "name": "BaseBdev3", 00:31:04.819 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:31:04.819 "is_configured": true, 00:31:04.819 "data_offset": 2048, 00:31:04.819 "data_size": 63488 00:31:04.819 }, 00:31:04.819 { 00:31:04.819 "name": "BaseBdev4", 00:31:04.819 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:31:04.819 "is_configured": true, 00:31:04.819 "data_offset": 2048, 00:31:04.819 "data_size": 63488 00:31:04.819 } 00:31:04.819 ] 00:31:04.819 }' 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:04.819 
17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.819 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:04.819 [2024-11-26 17:27:34.869799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:04.820 [2024-11-26 17:27:34.870184] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:04.820 [2024-11-26 17:27:34.870335] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:04.820 request: 00:31:04.820 { 00:31:04.820 "base_bdev": "BaseBdev1", 00:31:04.820 "raid_bdev": "raid_bdev1", 00:31:04.820 "method": "bdev_raid_add_base_bdev", 00:31:04.820 "req_id": 1 00:31:04.820 } 00:31:04.820 Got JSON-RPC error response 00:31:04.820 response: 00:31:04.820 { 00:31:04.820 "code": -22, 00:31:04.820 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:04.820 } 00:31:04.820 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:04.820 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:31:04.820 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:04.820 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:04.820 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:04.820 17:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:06.198 17:27:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:06.198 "name": "raid_bdev1", 00:31:06.198 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:31:06.198 "strip_size_kb": 0, 00:31:06.198 "state": "online", 00:31:06.198 "raid_level": "raid1", 00:31:06.198 "superblock": true, 00:31:06.198 "num_base_bdevs": 4, 00:31:06.198 "num_base_bdevs_discovered": 2, 00:31:06.198 "num_base_bdevs_operational": 2, 00:31:06.198 "base_bdevs_list": [ 00:31:06.198 { 00:31:06.198 "name": null, 00:31:06.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.198 "is_configured": false, 00:31:06.198 "data_offset": 0, 00:31:06.198 "data_size": 63488 00:31:06.198 }, 00:31:06.198 { 00:31:06.198 "name": null, 00:31:06.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.198 "is_configured": false, 00:31:06.198 "data_offset": 2048, 00:31:06.198 "data_size": 63488 00:31:06.198 }, 00:31:06.198 { 00:31:06.198 "name": "BaseBdev3", 00:31:06.198 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:31:06.198 "is_configured": true, 00:31:06.198 "data_offset": 2048, 00:31:06.198 "data_size": 63488 00:31:06.198 }, 00:31:06.198 { 00:31:06.198 "name": "BaseBdev4", 00:31:06.198 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:31:06.198 "is_configured": true, 00:31:06.198 "data_offset": 2048, 00:31:06.198 "data_size": 63488 00:31:06.198 } 00:31:06.198 ] 00:31:06.198 }' 00:31:06.198 17:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:06.198 17:27:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:06.457 "name": "raid_bdev1", 00:31:06.457 "uuid": "8c13cadd-decc-4d6b-bb11-44dde1848de2", 00:31:06.457 "strip_size_kb": 0, 00:31:06.457 "state": "online", 00:31:06.457 "raid_level": "raid1", 00:31:06.457 "superblock": true, 00:31:06.457 "num_base_bdevs": 4, 00:31:06.457 "num_base_bdevs_discovered": 2, 00:31:06.457 "num_base_bdevs_operational": 2, 00:31:06.457 "base_bdevs_list": [ 00:31:06.457 { 00:31:06.457 "name": null, 00:31:06.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.457 "is_configured": false, 00:31:06.457 "data_offset": 0, 00:31:06.457 "data_size": 63488 00:31:06.457 }, 00:31:06.457 { 00:31:06.457 "name": null, 00:31:06.457 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:06.457 "is_configured": false, 00:31:06.457 "data_offset": 2048, 00:31:06.457 "data_size": 63488 00:31:06.457 }, 00:31:06.457 { 00:31:06.457 "name": "BaseBdev3", 00:31:06.457 "uuid": "4ec88163-a0be-5013-9a3e-7416bc1ce394", 00:31:06.457 "is_configured": true, 00:31:06.457 "data_offset": 2048, 00:31:06.457 "data_size": 63488 00:31:06.457 }, 00:31:06.457 { 00:31:06.457 "name": "BaseBdev4", 00:31:06.457 "uuid": "56e6395d-1f35-5d03-8b31-ae1601f716e8", 00:31:06.457 "is_configured": true, 00:31:06.457 "data_offset": 2048, 00:31:06.457 "data_size": 63488 00:31:06.457 } 00:31:06.457 ] 00:31:06.457 }' 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79316 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79316 ']' 00:31:06.457 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79316 00:31:06.458 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:31:06.458 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:06.458 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79316 00:31:06.458 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:06.458 killing process with pid 79316 00:31:06.458 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:06.458 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79316' 00:31:06.458 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79316 00:31:06.458 Received shutdown signal, test time was about 18.241737 seconds 00:31:06.458 00:31:06.458 Latency(us) 00:31:06.458 [2024-11-26T17:27:36.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.458 [2024-11-26T17:27:36.572Z] =================================================================================================================== 00:31:06.458 [2024-11-26T17:27:36.572Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:06.458 [2024-11-26 17:27:36.496724] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:06.458 17:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79316 00:31:06.458 [2024-11-26 17:27:36.496869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:06.458 [2024-11-26 17:27:36.496951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:06.458 [2024-11-26 17:27:36.496963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:31:07.025 [2024-11-26 17:27:36.942724] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:08.401 17:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:31:08.401 00:31:08.401 real 0m21.967s 00:31:08.401 user 0m28.451s 00:31:08.401 sys 0m3.174s 00:31:08.401 17:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:08.401 17:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:08.401 ************************************ 00:31:08.401 END TEST raid_rebuild_test_sb_io 
00:31:08.401 ************************************ 00:31:08.401 17:27:38 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:31:08.401 17:27:38 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:31:08.401 17:27:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:08.401 17:27:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:08.401 17:27:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:08.401 ************************************ 00:31:08.401 START TEST raid5f_state_function_test 00:31:08.401 ************************************ 00:31:08.401 17:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:31:08.401 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:31:08.401 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:31:08.401 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:31:08.401 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:08.401 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:08.401 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:08.401 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:08.401 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:08.401 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:08.401 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:08.402 
17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80046 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:08.402 Process raid pid: 80046 
00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80046' 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80046 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80046 ']' 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:08.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:08.402 17:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:08.402 [2024-11-26 17:27:38.421108] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:31:08.402 [2024-11-26 17:27:38.421255] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.660 [2024-11-26 17:27:38.610545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.660 [2024-11-26 17:27:38.763635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.918 [2024-11-26 17:27:39.005295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:08.918 [2024-11-26 17:27:39.005356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.487 [2024-11-26 17:27:39.294331] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:09.487 [2024-11-26 17:27:39.294402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:09.487 [2024-11-26 17:27:39.294415] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:09.487 [2024-11-26 17:27:39.294429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:09.487 [2024-11-26 17:27:39.294438] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:31:09.487 [2024-11-26 17:27:39.294450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:09.487 "name": "Existed_Raid", 00:31:09.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.487 "strip_size_kb": 64, 00:31:09.487 "state": "configuring", 00:31:09.487 "raid_level": "raid5f", 00:31:09.487 "superblock": false, 00:31:09.487 "num_base_bdevs": 3, 00:31:09.487 "num_base_bdevs_discovered": 0, 00:31:09.487 "num_base_bdevs_operational": 3, 00:31:09.487 "base_bdevs_list": [ 00:31:09.487 { 00:31:09.487 "name": "BaseBdev1", 00:31:09.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.487 "is_configured": false, 00:31:09.487 "data_offset": 0, 00:31:09.487 "data_size": 0 00:31:09.487 }, 00:31:09.487 { 00:31:09.487 "name": "BaseBdev2", 00:31:09.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.487 "is_configured": false, 00:31:09.487 "data_offset": 0, 00:31:09.487 "data_size": 0 00:31:09.487 }, 00:31:09.487 { 00:31:09.487 "name": "BaseBdev3", 00:31:09.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.487 "is_configured": false, 00:31:09.487 "data_offset": 0, 00:31:09.487 "data_size": 0 00:31:09.487 } 00:31:09.487 ] 00:31:09.487 }' 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:09.487 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.747 [2024-11-26 17:27:39.725822] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:09.747 [2024-11-26 17:27:39.725872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.747 [2024-11-26 17:27:39.737809] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:09.747 [2024-11-26 17:27:39.737869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:09.747 [2024-11-26 17:27:39.737881] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:09.747 [2024-11-26 17:27:39.737896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:09.747 [2024-11-26 17:27:39.737905] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:09.747 [2024-11-26 17:27:39.737918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.747 [2024-11-26 17:27:39.790427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:09.747 BaseBdev1 00:31:09.747 17:27:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.747 [ 00:31:09.747 { 00:31:09.747 "name": "BaseBdev1", 00:31:09.747 "aliases": [ 00:31:09.747 "7d0c96c4-f833-4a70-8d70-65e3afd497f7" 00:31:09.747 ], 00:31:09.747 "product_name": "Malloc disk", 00:31:09.747 "block_size": 512, 00:31:09.747 "num_blocks": 65536, 00:31:09.747 "uuid": "7d0c96c4-f833-4a70-8d70-65e3afd497f7", 00:31:09.747 "assigned_rate_limits": { 00:31:09.747 "rw_ios_per_sec": 0, 00:31:09.747 
"rw_mbytes_per_sec": 0, 00:31:09.747 "r_mbytes_per_sec": 0, 00:31:09.747 "w_mbytes_per_sec": 0 00:31:09.747 }, 00:31:09.747 "claimed": true, 00:31:09.747 "claim_type": "exclusive_write", 00:31:09.747 "zoned": false, 00:31:09.747 "supported_io_types": { 00:31:09.747 "read": true, 00:31:09.747 "write": true, 00:31:09.747 "unmap": true, 00:31:09.747 "flush": true, 00:31:09.747 "reset": true, 00:31:09.747 "nvme_admin": false, 00:31:09.747 "nvme_io": false, 00:31:09.747 "nvme_io_md": false, 00:31:09.747 "write_zeroes": true, 00:31:09.747 "zcopy": true, 00:31:09.747 "get_zone_info": false, 00:31:09.747 "zone_management": false, 00:31:09.747 "zone_append": false, 00:31:09.747 "compare": false, 00:31:09.747 "compare_and_write": false, 00:31:09.747 "abort": true, 00:31:09.747 "seek_hole": false, 00:31:09.747 "seek_data": false, 00:31:09.747 "copy": true, 00:31:09.747 "nvme_iov_md": false 00:31:09.747 }, 00:31:09.747 "memory_domains": [ 00:31:09.747 { 00:31:09.747 "dma_device_id": "system", 00:31:09.747 "dma_device_type": 1 00:31:09.747 }, 00:31:09.747 { 00:31:09.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:09.747 "dma_device_type": 2 00:31:09.747 } 00:31:09.747 ], 00:31:09.747 "driver_specific": {} 00:31:09.747 } 00:31:09.747 ] 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:09.747 17:27:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.747 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.006 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.006 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:10.006 "name": "Existed_Raid", 00:31:10.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.006 "strip_size_kb": 64, 00:31:10.006 "state": "configuring", 00:31:10.006 "raid_level": "raid5f", 00:31:10.006 "superblock": false, 00:31:10.006 "num_base_bdevs": 3, 00:31:10.006 "num_base_bdevs_discovered": 1, 00:31:10.006 "num_base_bdevs_operational": 3, 00:31:10.006 "base_bdevs_list": [ 00:31:10.006 { 00:31:10.006 "name": "BaseBdev1", 00:31:10.006 "uuid": "7d0c96c4-f833-4a70-8d70-65e3afd497f7", 00:31:10.006 "is_configured": true, 00:31:10.006 "data_offset": 0, 00:31:10.006 "data_size": 65536 00:31:10.006 }, 00:31:10.006 { 00:31:10.006 "name": 
"BaseBdev2", 00:31:10.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.006 "is_configured": false, 00:31:10.006 "data_offset": 0, 00:31:10.006 "data_size": 0 00:31:10.006 }, 00:31:10.006 { 00:31:10.006 "name": "BaseBdev3", 00:31:10.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.006 "is_configured": false, 00:31:10.006 "data_offset": 0, 00:31:10.006 "data_size": 0 00:31:10.006 } 00:31:10.006 ] 00:31:10.006 }' 00:31:10.006 17:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:10.006 17:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.265 [2024-11-26 17:27:40.265827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:10.265 [2024-11-26 17:27:40.265899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.265 [2024-11-26 17:27:40.277867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:10.265 [2024-11-26 17:27:40.280225] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:31:10.265 [2024-11-26 17:27:40.280277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:10.265 [2024-11-26 17:27:40.280290] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:10.265 [2024-11-26 17:27:40.280303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.265 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:10.265 "name": "Existed_Raid", 00:31:10.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.265 "strip_size_kb": 64, 00:31:10.265 "state": "configuring", 00:31:10.265 "raid_level": "raid5f", 00:31:10.266 "superblock": false, 00:31:10.266 "num_base_bdevs": 3, 00:31:10.266 "num_base_bdevs_discovered": 1, 00:31:10.266 "num_base_bdevs_operational": 3, 00:31:10.266 "base_bdevs_list": [ 00:31:10.266 { 00:31:10.266 "name": "BaseBdev1", 00:31:10.266 "uuid": "7d0c96c4-f833-4a70-8d70-65e3afd497f7", 00:31:10.266 "is_configured": true, 00:31:10.266 "data_offset": 0, 00:31:10.266 "data_size": 65536 00:31:10.266 }, 00:31:10.266 { 00:31:10.266 "name": "BaseBdev2", 00:31:10.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.266 "is_configured": false, 00:31:10.266 "data_offset": 0, 00:31:10.266 "data_size": 0 00:31:10.266 }, 00:31:10.266 { 00:31:10.266 "name": "BaseBdev3", 00:31:10.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.266 "is_configured": false, 00:31:10.266 "data_offset": 0, 00:31:10.266 "data_size": 0 00:31:10.266 } 00:31:10.266 ] 00:31:10.266 }' 00:31:10.266 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:10.266 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.932 BaseBdev2 00:31:10.932 [2024-11-26 17:27:40.792637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:10.932 [ 00:31:10.932 { 00:31:10.932 "name": "BaseBdev2", 00:31:10.932 "aliases": [ 00:31:10.932 "e510b4ac-3389-40c1-b572-3cc584673a27" 00:31:10.932 ], 00:31:10.932 "product_name": "Malloc disk", 00:31:10.932 "block_size": 512, 00:31:10.932 "num_blocks": 65536, 00:31:10.932 "uuid": "e510b4ac-3389-40c1-b572-3cc584673a27", 00:31:10.932 "assigned_rate_limits": { 00:31:10.932 "rw_ios_per_sec": 0, 00:31:10.932 "rw_mbytes_per_sec": 0, 00:31:10.932 "r_mbytes_per_sec": 0, 00:31:10.932 "w_mbytes_per_sec": 0 00:31:10.932 }, 00:31:10.932 "claimed": true, 00:31:10.932 "claim_type": "exclusive_write", 00:31:10.932 "zoned": false, 00:31:10.932 "supported_io_types": { 00:31:10.932 "read": true, 00:31:10.932 "write": true, 00:31:10.932 "unmap": true, 00:31:10.932 "flush": true, 00:31:10.932 "reset": true, 00:31:10.932 "nvme_admin": false, 00:31:10.932 "nvme_io": false, 00:31:10.932 "nvme_io_md": false, 00:31:10.932 "write_zeroes": true, 00:31:10.932 "zcopy": true, 00:31:10.932 "get_zone_info": false, 00:31:10.932 "zone_management": false, 00:31:10.932 "zone_append": false, 00:31:10.932 "compare": false, 00:31:10.932 "compare_and_write": false, 00:31:10.932 "abort": true, 00:31:10.932 "seek_hole": false, 00:31:10.932 "seek_data": false, 00:31:10.932 "copy": true, 00:31:10.932 "nvme_iov_md": false 00:31:10.932 }, 00:31:10.932 "memory_domains": [ 00:31:10.932 { 00:31:10.932 "dma_device_id": "system", 00:31:10.932 "dma_device_type": 1 00:31:10.932 }, 00:31:10.932 { 00:31:10.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:10.932 "dma_device_type": 2 00:31:10.932 } 00:31:10.932 ], 00:31:10.932 "driver_specific": {} 00:31:10.932 } 00:31:10.932 ] 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:10.932 17:27:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:31:10.933 "name": "Existed_Raid", 00:31:10.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.933 "strip_size_kb": 64, 00:31:10.933 "state": "configuring", 00:31:10.933 "raid_level": "raid5f", 00:31:10.933 "superblock": false, 00:31:10.933 "num_base_bdevs": 3, 00:31:10.933 "num_base_bdevs_discovered": 2, 00:31:10.933 "num_base_bdevs_operational": 3, 00:31:10.933 "base_bdevs_list": [ 00:31:10.933 { 00:31:10.933 "name": "BaseBdev1", 00:31:10.933 "uuid": "7d0c96c4-f833-4a70-8d70-65e3afd497f7", 00:31:10.933 "is_configured": true, 00:31:10.933 "data_offset": 0, 00:31:10.933 "data_size": 65536 00:31:10.933 }, 00:31:10.933 { 00:31:10.933 "name": "BaseBdev2", 00:31:10.933 "uuid": "e510b4ac-3389-40c1-b572-3cc584673a27", 00:31:10.933 "is_configured": true, 00:31:10.933 "data_offset": 0, 00:31:10.933 "data_size": 65536 00:31:10.933 }, 00:31:10.933 { 00:31:10.933 "name": "BaseBdev3", 00:31:10.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.933 "is_configured": false, 00:31:10.933 "data_offset": 0, 00:31:10.933 "data_size": 0 00:31:10.933 } 00:31:10.933 ] 00:31:10.933 }' 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:10.933 17:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.212 [2024-11-26 17:27:41.300755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:11.212 [2024-11-26 17:27:41.300857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:11.212 [2024-11-26 17:27:41.300881] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:11.212 [2024-11-26 17:27:41.301190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:11.212 BaseBdev3 00:31:11.212 [2024-11-26 17:27:41.307251] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:11.212 [2024-11-26 17:27:41.307274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:31:11.212 [2024-11-26 17:27:41.307575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:31:11.212 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.470 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.470 [ 00:31:11.470 { 00:31:11.470 "name": "BaseBdev3", 00:31:11.470 "aliases": [ 00:31:11.470 "b3dfbfd8-9870-4dd5-a77e-67434501b1d5" 00:31:11.470 ], 00:31:11.470 "product_name": "Malloc disk", 00:31:11.470 "block_size": 512, 00:31:11.470 "num_blocks": 65536, 00:31:11.470 "uuid": "b3dfbfd8-9870-4dd5-a77e-67434501b1d5", 00:31:11.470 "assigned_rate_limits": { 00:31:11.470 "rw_ios_per_sec": 0, 00:31:11.470 "rw_mbytes_per_sec": 0, 00:31:11.470 "r_mbytes_per_sec": 0, 00:31:11.471 "w_mbytes_per_sec": 0 00:31:11.471 }, 00:31:11.471 "claimed": true, 00:31:11.471 "claim_type": "exclusive_write", 00:31:11.471 "zoned": false, 00:31:11.471 "supported_io_types": { 00:31:11.471 "read": true, 00:31:11.471 "write": true, 00:31:11.471 "unmap": true, 00:31:11.471 "flush": true, 00:31:11.471 "reset": true, 00:31:11.471 "nvme_admin": false, 00:31:11.471 "nvme_io": false, 00:31:11.471 "nvme_io_md": false, 00:31:11.471 "write_zeroes": true, 00:31:11.471 "zcopy": true, 00:31:11.471 "get_zone_info": false, 00:31:11.471 "zone_management": false, 00:31:11.471 "zone_append": false, 00:31:11.471 "compare": false, 00:31:11.471 "compare_and_write": false, 00:31:11.471 "abort": true, 00:31:11.471 "seek_hole": false, 00:31:11.471 "seek_data": false, 00:31:11.471 "copy": true, 00:31:11.471 "nvme_iov_md": false 00:31:11.471 }, 00:31:11.471 "memory_domains": [ 00:31:11.471 { 00:31:11.471 "dma_device_id": "system", 00:31:11.471 "dma_device_type": 1 00:31:11.471 }, 00:31:11.471 { 00:31:11.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.471 "dma_device_type": 2 00:31:11.471 } 00:31:11.471 ], 00:31:11.471 "driver_specific": {} 00:31:11.471 } 00:31:11.471 ] 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.471 17:27:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:11.471 "name": "Existed_Raid", 00:31:11.471 "uuid": "baac6d13-fe6f-4936-be98-c3423a2f3a21", 00:31:11.471 "strip_size_kb": 64, 00:31:11.471 "state": "online", 00:31:11.471 "raid_level": "raid5f", 00:31:11.471 "superblock": false, 00:31:11.471 "num_base_bdevs": 3, 00:31:11.471 "num_base_bdevs_discovered": 3, 00:31:11.471 "num_base_bdevs_operational": 3, 00:31:11.471 "base_bdevs_list": [ 00:31:11.471 { 00:31:11.471 "name": "BaseBdev1", 00:31:11.471 "uuid": "7d0c96c4-f833-4a70-8d70-65e3afd497f7", 00:31:11.471 "is_configured": true, 00:31:11.471 "data_offset": 0, 00:31:11.471 "data_size": 65536 00:31:11.471 }, 00:31:11.471 { 00:31:11.471 "name": "BaseBdev2", 00:31:11.471 "uuid": "e510b4ac-3389-40c1-b572-3cc584673a27", 00:31:11.471 "is_configured": true, 00:31:11.471 "data_offset": 0, 00:31:11.471 "data_size": 65536 00:31:11.471 }, 00:31:11.471 { 00:31:11.471 "name": "BaseBdev3", 00:31:11.471 "uuid": "b3dfbfd8-9870-4dd5-a77e-67434501b1d5", 00:31:11.471 "is_configured": true, 00:31:11.471 "data_offset": 0, 00:31:11.471 "data_size": 65536 00:31:11.471 } 00:31:11.471 ] 00:31:11.471 }' 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:11.471 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.729 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:11.729 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:11.729 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:11.729 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:11.729 17:27:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:11.729 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:11.729 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:11.729 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:11.729 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.729 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.729 [2024-11-26 17:27:41.822195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:11.989 "name": "Existed_Raid", 00:31:11.989 "aliases": [ 00:31:11.989 "baac6d13-fe6f-4936-be98-c3423a2f3a21" 00:31:11.989 ], 00:31:11.989 "product_name": "Raid Volume", 00:31:11.989 "block_size": 512, 00:31:11.989 "num_blocks": 131072, 00:31:11.989 "uuid": "baac6d13-fe6f-4936-be98-c3423a2f3a21", 00:31:11.989 "assigned_rate_limits": { 00:31:11.989 "rw_ios_per_sec": 0, 00:31:11.989 "rw_mbytes_per_sec": 0, 00:31:11.989 "r_mbytes_per_sec": 0, 00:31:11.989 "w_mbytes_per_sec": 0 00:31:11.989 }, 00:31:11.989 "claimed": false, 00:31:11.989 "zoned": false, 00:31:11.989 "supported_io_types": { 00:31:11.989 "read": true, 00:31:11.989 "write": true, 00:31:11.989 "unmap": false, 00:31:11.989 "flush": false, 00:31:11.989 "reset": true, 00:31:11.989 "nvme_admin": false, 00:31:11.989 "nvme_io": false, 00:31:11.989 "nvme_io_md": false, 00:31:11.989 "write_zeroes": true, 00:31:11.989 "zcopy": false, 00:31:11.989 "get_zone_info": false, 00:31:11.989 "zone_management": false, 00:31:11.989 "zone_append": false, 
00:31:11.989 "compare": false, 00:31:11.989 "compare_and_write": false, 00:31:11.989 "abort": false, 00:31:11.989 "seek_hole": false, 00:31:11.989 "seek_data": false, 00:31:11.989 "copy": false, 00:31:11.989 "nvme_iov_md": false 00:31:11.989 }, 00:31:11.989 "driver_specific": { 00:31:11.989 "raid": { 00:31:11.989 "uuid": "baac6d13-fe6f-4936-be98-c3423a2f3a21", 00:31:11.989 "strip_size_kb": 64, 00:31:11.989 "state": "online", 00:31:11.989 "raid_level": "raid5f", 00:31:11.989 "superblock": false, 00:31:11.989 "num_base_bdevs": 3, 00:31:11.989 "num_base_bdevs_discovered": 3, 00:31:11.989 "num_base_bdevs_operational": 3, 00:31:11.989 "base_bdevs_list": [ 00:31:11.989 { 00:31:11.989 "name": "BaseBdev1", 00:31:11.989 "uuid": "7d0c96c4-f833-4a70-8d70-65e3afd497f7", 00:31:11.989 "is_configured": true, 00:31:11.989 "data_offset": 0, 00:31:11.989 "data_size": 65536 00:31:11.989 }, 00:31:11.989 { 00:31:11.989 "name": "BaseBdev2", 00:31:11.989 "uuid": "e510b4ac-3389-40c1-b572-3cc584673a27", 00:31:11.989 "is_configured": true, 00:31:11.989 "data_offset": 0, 00:31:11.989 "data_size": 65536 00:31:11.989 }, 00:31:11.989 { 00:31:11.989 "name": "BaseBdev3", 00:31:11.989 "uuid": "b3dfbfd8-9870-4dd5-a77e-67434501b1d5", 00:31:11.989 "is_configured": true, 00:31:11.989 "data_offset": 0, 00:31:11.989 "data_size": 65536 00:31:11.989 } 00:31:11.989 ] 00:31:11.989 } 00:31:11.989 } 00:31:11.989 }' 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:31:11.989 BaseBdev2 00:31:11.989 BaseBdev3' 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.989 17:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.989 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.989 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:11.989 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:11.989 17:27:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:11.989 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:11.989 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.989 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.989 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:11.989 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.989 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:11.989 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:11.989 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:11.989 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.989 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.989 [2024-11-26 17:27:42.089792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:31:12.247 
17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.247 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:12.247 "name": "Existed_Raid", 00:31:12.247 "uuid": "baac6d13-fe6f-4936-be98-c3423a2f3a21", 00:31:12.247 "strip_size_kb": 64, 00:31:12.247 "state": 
"online", 00:31:12.247 "raid_level": "raid5f", 00:31:12.247 "superblock": false, 00:31:12.247 "num_base_bdevs": 3, 00:31:12.247 "num_base_bdevs_discovered": 2, 00:31:12.247 "num_base_bdevs_operational": 2, 00:31:12.247 "base_bdevs_list": [ 00:31:12.247 { 00:31:12.247 "name": null, 00:31:12.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:12.247 "is_configured": false, 00:31:12.247 "data_offset": 0, 00:31:12.247 "data_size": 65536 00:31:12.247 }, 00:31:12.247 { 00:31:12.247 "name": "BaseBdev2", 00:31:12.247 "uuid": "e510b4ac-3389-40c1-b572-3cc584673a27", 00:31:12.247 "is_configured": true, 00:31:12.247 "data_offset": 0, 00:31:12.247 "data_size": 65536 00:31:12.247 }, 00:31:12.247 { 00:31:12.247 "name": "BaseBdev3", 00:31:12.247 "uuid": "b3dfbfd8-9870-4dd5-a77e-67434501b1d5", 00:31:12.247 "is_configured": true, 00:31:12.247 "data_offset": 0, 00:31:12.247 "data_size": 65536 00:31:12.247 } 00:31:12.248 ] 00:31:12.248 }' 00:31:12.248 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:12.248 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:12.507 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:12.507 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:12.507 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:12.507 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:12.507 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.507 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:12.507 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:12.766 [2024-11-26 17:27:42.640571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:12.766 [2024-11-26 17:27:42.640865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:12.766 [2024-11-26 17:27:42.744672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.766 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:12.766 [2024-11-26 17:27:42.800662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:12.766 [2024-11-26 17:27:42.800726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.024 17:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.024 BaseBdev2 00:31:13.024 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.024 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:31:13.024 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:13.024 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:13.024 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:13.024 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:31:13.025 [ 00:31:13.025 { 00:31:13.025 "name": "BaseBdev2", 00:31:13.025 "aliases": [ 00:31:13.025 "02313e3d-6dd1-4416-a011-dc392fe127ac" 00:31:13.025 ], 00:31:13.025 "product_name": "Malloc disk", 00:31:13.025 "block_size": 512, 00:31:13.025 "num_blocks": 65536, 00:31:13.025 "uuid": "02313e3d-6dd1-4416-a011-dc392fe127ac", 00:31:13.025 "assigned_rate_limits": { 00:31:13.025 "rw_ios_per_sec": 0, 00:31:13.025 "rw_mbytes_per_sec": 0, 00:31:13.025 "r_mbytes_per_sec": 0, 00:31:13.025 "w_mbytes_per_sec": 0 00:31:13.025 }, 00:31:13.025 "claimed": false, 00:31:13.025 "zoned": false, 00:31:13.025 "supported_io_types": { 00:31:13.025 "read": true, 00:31:13.025 "write": true, 00:31:13.025 "unmap": true, 00:31:13.025 "flush": true, 00:31:13.025 "reset": true, 00:31:13.025 "nvme_admin": false, 00:31:13.025 "nvme_io": false, 00:31:13.025 "nvme_io_md": false, 00:31:13.025 "write_zeroes": true, 00:31:13.025 "zcopy": true, 00:31:13.025 "get_zone_info": false, 00:31:13.025 "zone_management": false, 00:31:13.025 "zone_append": false, 00:31:13.025 "compare": false, 00:31:13.025 "compare_and_write": false, 00:31:13.025 "abort": true, 00:31:13.025 "seek_hole": false, 00:31:13.025 "seek_data": false, 00:31:13.025 "copy": true, 00:31:13.025 "nvme_iov_md": false 00:31:13.025 }, 00:31:13.025 "memory_domains": [ 00:31:13.025 { 00:31:13.025 "dma_device_id": "system", 00:31:13.025 "dma_device_type": 1 00:31:13.025 }, 00:31:13.025 { 00:31:13.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:13.025 "dma_device_type": 2 00:31:13.025 } 00:31:13.025 ], 00:31:13.025 "driver_specific": {} 00:31:13.025 } 00:31:13.025 ] 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.025 BaseBdev3 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.025 17:27:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:13.025 [ 00:31:13.025 { 00:31:13.025 "name": "BaseBdev3", 00:31:13.025 "aliases": [ 00:31:13.025 "62c21cd9-4c0e-4aac-98d5-d9f0cb9ed52e" 00:31:13.025 ], 00:31:13.025 "product_name": "Malloc disk", 00:31:13.025 "block_size": 512, 00:31:13.025 "num_blocks": 65536, 00:31:13.025 "uuid": "62c21cd9-4c0e-4aac-98d5-d9f0cb9ed52e", 00:31:13.025 "assigned_rate_limits": { 00:31:13.025 "rw_ios_per_sec": 0, 00:31:13.025 "rw_mbytes_per_sec": 0, 00:31:13.025 "r_mbytes_per_sec": 0, 00:31:13.025 "w_mbytes_per_sec": 0 00:31:13.025 }, 00:31:13.025 "claimed": false, 00:31:13.025 "zoned": false, 00:31:13.025 "supported_io_types": { 00:31:13.025 "read": true, 00:31:13.025 "write": true, 00:31:13.025 "unmap": true, 00:31:13.025 "flush": true, 00:31:13.025 "reset": true, 00:31:13.025 "nvme_admin": false, 00:31:13.284 "nvme_io": false, 00:31:13.284 "nvme_io_md": false, 00:31:13.284 "write_zeroes": true, 00:31:13.284 "zcopy": true, 00:31:13.284 "get_zone_info": false, 00:31:13.284 "zone_management": false, 00:31:13.284 "zone_append": false, 00:31:13.284 "compare": false, 00:31:13.284 "compare_and_write": false, 00:31:13.284 "abort": true, 00:31:13.284 "seek_hole": false, 00:31:13.284 "seek_data": false, 00:31:13.284 "copy": true, 00:31:13.284 "nvme_iov_md": false 00:31:13.284 }, 00:31:13.284 "memory_domains": [ 00:31:13.284 { 00:31:13.284 "dma_device_id": "system", 00:31:13.284 "dma_device_type": 1 00:31:13.284 }, 00:31:13.284 { 00:31:13.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:13.284 "dma_device_type": 2 00:31:13.284 } 00:31:13.284 ], 00:31:13.284 "driver_specific": {} 00:31:13.284 } 00:31:13.284 ] 00:31:13.284 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.284 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:13.284 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:13.284 17:27:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:13.284 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:13.284 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.284 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.284 [2024-11-26 17:27:43.144510] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:13.284 [2024-11-26 17:27:43.144743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:13.284 [2024-11-26 17:27:43.144900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:13.284 [2024-11-26 17:27:43.147658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:13.284 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:13.285 17:27:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:13.285 "name": "Existed_Raid", 00:31:13.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.285 "strip_size_kb": 64, 00:31:13.285 "state": "configuring", 00:31:13.285 "raid_level": "raid5f", 00:31:13.285 "superblock": false, 00:31:13.285 "num_base_bdevs": 3, 00:31:13.285 "num_base_bdevs_discovered": 2, 00:31:13.285 "num_base_bdevs_operational": 3, 00:31:13.285 "base_bdevs_list": [ 00:31:13.285 { 00:31:13.285 "name": "BaseBdev1", 00:31:13.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.285 "is_configured": false, 00:31:13.285 "data_offset": 0, 00:31:13.285 "data_size": 0 00:31:13.285 }, 00:31:13.285 { 00:31:13.285 "name": "BaseBdev2", 00:31:13.285 "uuid": "02313e3d-6dd1-4416-a011-dc392fe127ac", 00:31:13.285 "is_configured": true, 00:31:13.285 "data_offset": 0, 00:31:13.285 "data_size": 65536 00:31:13.285 }, 00:31:13.285 { 00:31:13.285 "name": "BaseBdev3", 00:31:13.285 "uuid": "62c21cd9-4c0e-4aac-98d5-d9f0cb9ed52e", 00:31:13.285 "is_configured": true, 
00:31:13.285 "data_offset": 0, 00:31:13.285 "data_size": 65536 00:31:13.285 } 00:31:13.285 ] 00:31:13.285 }' 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:13.285 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.544 [2024-11-26 17:27:43.523975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:13.544 17:27:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:13.544 "name": "Existed_Raid", 00:31:13.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.544 "strip_size_kb": 64, 00:31:13.544 "state": "configuring", 00:31:13.544 "raid_level": "raid5f", 00:31:13.544 "superblock": false, 00:31:13.544 "num_base_bdevs": 3, 00:31:13.544 "num_base_bdevs_discovered": 1, 00:31:13.544 "num_base_bdevs_operational": 3, 00:31:13.544 "base_bdevs_list": [ 00:31:13.544 { 00:31:13.544 "name": "BaseBdev1", 00:31:13.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.544 "is_configured": false, 00:31:13.544 "data_offset": 0, 00:31:13.544 "data_size": 0 00:31:13.544 }, 00:31:13.544 { 00:31:13.544 "name": null, 00:31:13.544 "uuid": "02313e3d-6dd1-4416-a011-dc392fe127ac", 00:31:13.544 "is_configured": false, 00:31:13.544 "data_offset": 0, 00:31:13.544 "data_size": 65536 00:31:13.544 }, 00:31:13.544 { 00:31:13.544 "name": "BaseBdev3", 00:31:13.544 "uuid": "62c21cd9-4c0e-4aac-98d5-d9f0cb9ed52e", 00:31:13.544 "is_configured": true, 00:31:13.544 "data_offset": 0, 00:31:13.544 "data_size": 65536 00:31:13.544 } 00:31:13.544 ] 00:31:13.544 }' 00:31:13.544 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:13.544 17:27:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.112 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.113 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.113 17:27:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:14.113 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.113 17:27:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.113 [2024-11-26 17:27:44.054130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:14.113 BaseBdev1 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:14.113 17:27:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.113 [ 00:31:14.113 { 00:31:14.113 "name": "BaseBdev1", 00:31:14.113 "aliases": [ 00:31:14.113 "fb127a82-c538-4eaa-bf30-9294f07c4f02" 00:31:14.113 ], 00:31:14.113 "product_name": "Malloc disk", 00:31:14.113 "block_size": 512, 00:31:14.113 "num_blocks": 65536, 00:31:14.113 "uuid": "fb127a82-c538-4eaa-bf30-9294f07c4f02", 00:31:14.113 "assigned_rate_limits": { 00:31:14.113 "rw_ios_per_sec": 0, 00:31:14.113 "rw_mbytes_per_sec": 0, 00:31:14.113 "r_mbytes_per_sec": 0, 00:31:14.113 "w_mbytes_per_sec": 0 00:31:14.113 }, 00:31:14.113 "claimed": true, 00:31:14.113 "claim_type": "exclusive_write", 00:31:14.113 "zoned": false, 00:31:14.113 "supported_io_types": { 00:31:14.113 "read": true, 00:31:14.113 "write": true, 00:31:14.113 "unmap": true, 00:31:14.113 "flush": true, 00:31:14.113 "reset": true, 00:31:14.113 "nvme_admin": false, 00:31:14.113 "nvme_io": false, 00:31:14.113 "nvme_io_md": false, 00:31:14.113 "write_zeroes": true, 00:31:14.113 "zcopy": true, 00:31:14.113 "get_zone_info": false, 00:31:14.113 "zone_management": false, 00:31:14.113 "zone_append": false, 00:31:14.113 
"compare": false, 00:31:14.113 "compare_and_write": false, 00:31:14.113 "abort": true, 00:31:14.113 "seek_hole": false, 00:31:14.113 "seek_data": false, 00:31:14.113 "copy": true, 00:31:14.113 "nvme_iov_md": false 00:31:14.113 }, 00:31:14.113 "memory_domains": [ 00:31:14.113 { 00:31:14.113 "dma_device_id": "system", 00:31:14.113 "dma_device_type": 1 00:31:14.113 }, 00:31:14.113 { 00:31:14.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:14.113 "dma_device_type": 2 00:31:14.113 } 00:31:14.113 ], 00:31:14.113 "driver_specific": {} 00:31:14.113 } 00:31:14.113 ] 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:14.113 17:27:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:14.113 "name": "Existed_Raid", 00:31:14.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:14.113 "strip_size_kb": 64, 00:31:14.113 "state": "configuring", 00:31:14.113 "raid_level": "raid5f", 00:31:14.113 "superblock": false, 00:31:14.113 "num_base_bdevs": 3, 00:31:14.113 "num_base_bdevs_discovered": 2, 00:31:14.113 "num_base_bdevs_operational": 3, 00:31:14.113 "base_bdevs_list": [ 00:31:14.113 { 00:31:14.113 "name": "BaseBdev1", 00:31:14.113 "uuid": "fb127a82-c538-4eaa-bf30-9294f07c4f02", 00:31:14.113 "is_configured": true, 00:31:14.113 "data_offset": 0, 00:31:14.113 "data_size": 65536 00:31:14.113 }, 00:31:14.113 { 00:31:14.113 "name": null, 00:31:14.113 "uuid": "02313e3d-6dd1-4416-a011-dc392fe127ac", 00:31:14.113 "is_configured": false, 00:31:14.113 "data_offset": 0, 00:31:14.113 "data_size": 65536 00:31:14.113 }, 00:31:14.113 { 00:31:14.113 "name": "BaseBdev3", 00:31:14.113 "uuid": "62c21cd9-4c0e-4aac-98d5-d9f0cb9ed52e", 00:31:14.113 "is_configured": true, 00:31:14.113 "data_offset": 0, 00:31:14.113 "data_size": 65536 00:31:14.113 } 00:31:14.113 ] 00:31:14.113 }' 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:14.113 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.683 17:27:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.683 [2024-11-26 17:27:44.577661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:14.683 17:27:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.683 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:14.683 "name": "Existed_Raid", 00:31:14.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:14.683 "strip_size_kb": 64, 00:31:14.683 "state": "configuring", 00:31:14.683 "raid_level": "raid5f", 00:31:14.683 "superblock": false, 00:31:14.683 "num_base_bdevs": 3, 00:31:14.683 "num_base_bdevs_discovered": 1, 00:31:14.683 "num_base_bdevs_operational": 3, 00:31:14.683 "base_bdevs_list": [ 00:31:14.683 { 00:31:14.683 "name": "BaseBdev1", 00:31:14.683 "uuid": "fb127a82-c538-4eaa-bf30-9294f07c4f02", 00:31:14.683 "is_configured": true, 00:31:14.683 "data_offset": 0, 00:31:14.683 "data_size": 65536 00:31:14.683 }, 00:31:14.683 { 00:31:14.683 "name": null, 00:31:14.683 "uuid": "02313e3d-6dd1-4416-a011-dc392fe127ac", 00:31:14.683 "is_configured": false, 00:31:14.683 "data_offset": 0, 00:31:14.683 "data_size": 65536 00:31:14.684 }, 00:31:14.684 { 00:31:14.684 "name": null, 
00:31:14.684 "uuid": "62c21cd9-4c0e-4aac-98d5-d9f0cb9ed52e", 00:31:14.684 "is_configured": false, 00:31:14.684 "data_offset": 0, 00:31:14.684 "data_size": 65536 00:31:14.684 } 00:31:14.684 ] 00:31:14.684 }' 00:31:14.684 17:27:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:14.684 17:27:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.942 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.942 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:14.942 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.942 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.942 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.200 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:31:15.200 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:15.200 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.200 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.200 [2024-11-26 17:27:45.073544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:15.200 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.200 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:15.200 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:15.200 17:27:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:15.200 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:15.200 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:15.200 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:15.200 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:15.200 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:15.201 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:15.201 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:15.201 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:15.201 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.201 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.201 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:15.201 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.201 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:15.201 "name": "Existed_Raid", 00:31:15.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:15.201 "strip_size_kb": 64, 00:31:15.201 "state": "configuring", 00:31:15.201 "raid_level": "raid5f", 00:31:15.201 "superblock": false, 00:31:15.201 "num_base_bdevs": 3, 00:31:15.201 "num_base_bdevs_discovered": 2, 00:31:15.201 "num_base_bdevs_operational": 3, 00:31:15.201 "base_bdevs_list": [ 00:31:15.201 { 
00:31:15.201 "name": "BaseBdev1", 00:31:15.201 "uuid": "fb127a82-c538-4eaa-bf30-9294f07c4f02", 00:31:15.201 "is_configured": true, 00:31:15.201 "data_offset": 0, 00:31:15.201 "data_size": 65536 00:31:15.201 }, 00:31:15.201 { 00:31:15.201 "name": null, 00:31:15.201 "uuid": "02313e3d-6dd1-4416-a011-dc392fe127ac", 00:31:15.201 "is_configured": false, 00:31:15.201 "data_offset": 0, 00:31:15.201 "data_size": 65536 00:31:15.201 }, 00:31:15.201 { 00:31:15.201 "name": "BaseBdev3", 00:31:15.201 "uuid": "62c21cd9-4c0e-4aac-98d5-d9f0cb9ed52e", 00:31:15.201 "is_configured": true, 00:31:15.201 "data_offset": 0, 00:31:15.201 "data_size": 65536 00:31:15.201 } 00:31:15.201 ] 00:31:15.201 }' 00:31:15.201 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:15.201 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.459 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:15.459 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:15.460 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.460 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.460 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.460 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:31:15.460 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:15.460 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.460 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.460 [2024-11-26 17:27:45.552853] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:15.722 "name": "Existed_Raid", 00:31:15.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:15.722 "strip_size_kb": 64, 00:31:15.722 "state": "configuring", 00:31:15.722 "raid_level": "raid5f", 00:31:15.722 "superblock": false, 00:31:15.722 "num_base_bdevs": 3, 00:31:15.722 "num_base_bdevs_discovered": 1, 00:31:15.722 "num_base_bdevs_operational": 3, 00:31:15.722 "base_bdevs_list": [ 00:31:15.722 { 00:31:15.722 "name": null, 00:31:15.722 "uuid": "fb127a82-c538-4eaa-bf30-9294f07c4f02", 00:31:15.722 "is_configured": false, 00:31:15.722 "data_offset": 0, 00:31:15.722 "data_size": 65536 00:31:15.722 }, 00:31:15.722 { 00:31:15.722 "name": null, 00:31:15.722 "uuid": "02313e3d-6dd1-4416-a011-dc392fe127ac", 00:31:15.722 "is_configured": false, 00:31:15.722 "data_offset": 0, 00:31:15.722 "data_size": 65536 00:31:15.722 }, 00:31:15.722 { 00:31:15.722 "name": "BaseBdev3", 00:31:15.722 "uuid": "62c21cd9-4c0e-4aac-98d5-d9f0cb9ed52e", 00:31:15.722 "is_configured": true, 00:31:15.722 "data_offset": 0, 00:31:15.722 "data_size": 65536 00:31:15.722 } 00:31:15.722 ] 00:31:15.722 }' 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:15.722 17:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.291 [2024-11-26 17:27:46.162639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:16.291 17:27:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:16.291 "name": "Existed_Raid", 00:31:16.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.291 "strip_size_kb": 64, 00:31:16.291 "state": "configuring", 00:31:16.291 "raid_level": "raid5f", 00:31:16.291 "superblock": false, 00:31:16.291 "num_base_bdevs": 3, 00:31:16.291 "num_base_bdevs_discovered": 2, 00:31:16.291 "num_base_bdevs_operational": 3, 00:31:16.291 "base_bdevs_list": [ 00:31:16.291 { 00:31:16.291 "name": null, 00:31:16.291 "uuid": "fb127a82-c538-4eaa-bf30-9294f07c4f02", 00:31:16.291 "is_configured": false, 00:31:16.291 "data_offset": 0, 00:31:16.291 "data_size": 65536 00:31:16.291 }, 00:31:16.291 { 00:31:16.291 "name": "BaseBdev2", 00:31:16.291 "uuid": "02313e3d-6dd1-4416-a011-dc392fe127ac", 00:31:16.291 "is_configured": true, 00:31:16.291 "data_offset": 0, 00:31:16.291 "data_size": 65536 00:31:16.291 }, 00:31:16.291 { 00:31:16.291 "name": "BaseBdev3", 00:31:16.291 "uuid": "62c21cd9-4c0e-4aac-98d5-d9f0cb9ed52e", 00:31:16.291 "is_configured": true, 00:31:16.291 "data_offset": 0, 00:31:16.291 "data_size": 65536 00:31:16.291 } 00:31:16.291 ] 00:31:16.291 }' 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:16.291 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.549 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:16.549 17:27:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:16.549 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.549 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.549 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.549 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:31:16.549 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:16.549 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:16.549 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.549 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.549 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.935 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fb127a82-c538-4eaa-bf30-9294f07c4f02 00:31:16.935 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.935 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.935 [2024-11-26 17:27:46.705385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:16.935 [2024-11-26 17:27:46.705630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:16.935 [2024-11-26 17:27:46.705660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:16.935 [2024-11-26 17:27:46.705997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:31:16.935 NewBaseBdev 00:31:16.935 [2024-11-26 17:27:46.711672] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:16.935 [2024-11-26 17:27:46.711697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:31:16.935 [2024-11-26 17:27:46.712004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:16.935 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.935 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:31:16.935 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:31:16.935 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:16.935 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:16.935 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:16.935 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:16.935 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:16.935 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.935 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.936 17:27:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.936 [ 00:31:16.936 { 00:31:16.936 "name": "NewBaseBdev", 00:31:16.936 "aliases": [ 00:31:16.936 "fb127a82-c538-4eaa-bf30-9294f07c4f02" 00:31:16.936 ], 00:31:16.936 "product_name": "Malloc disk", 00:31:16.936 "block_size": 512, 00:31:16.936 "num_blocks": 65536, 00:31:16.936 "uuid": "fb127a82-c538-4eaa-bf30-9294f07c4f02", 00:31:16.936 "assigned_rate_limits": { 00:31:16.936 "rw_ios_per_sec": 0, 00:31:16.936 "rw_mbytes_per_sec": 0, 00:31:16.936 "r_mbytes_per_sec": 0, 00:31:16.936 "w_mbytes_per_sec": 0 00:31:16.936 }, 00:31:16.936 "claimed": true, 00:31:16.936 "claim_type": "exclusive_write", 00:31:16.936 "zoned": false, 00:31:16.936 "supported_io_types": { 00:31:16.936 "read": true, 00:31:16.936 "write": true, 00:31:16.936 "unmap": true, 00:31:16.936 "flush": true, 00:31:16.936 "reset": true, 00:31:16.936 "nvme_admin": false, 00:31:16.936 "nvme_io": false, 00:31:16.936 "nvme_io_md": false, 00:31:16.936 "write_zeroes": true, 00:31:16.936 "zcopy": true, 00:31:16.936 "get_zone_info": false, 00:31:16.936 "zone_management": false, 00:31:16.936 "zone_append": false, 00:31:16.936 "compare": false, 00:31:16.936 "compare_and_write": false, 00:31:16.936 "abort": true, 00:31:16.936 "seek_hole": false, 00:31:16.936 "seek_data": false, 00:31:16.936 "copy": true, 00:31:16.936 "nvme_iov_md": false 00:31:16.936 }, 00:31:16.936 "memory_domains": [ 00:31:16.936 { 00:31:16.936 "dma_device_id": "system", 00:31:16.936 "dma_device_type": 1 00:31:16.936 }, 00:31:16.936 { 00:31:16.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:16.936 "dma_device_type": 2 00:31:16.936 } 00:31:16.936 ], 00:31:16.936 "driver_specific": {} 00:31:16.936 } 00:31:16.936 ] 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:16.936 17:27:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:16.936 "name": "Existed_Raid", 00:31:16.936 "uuid": "2a1433d6-b1f3-45ae-be1b-016a820c1e5e", 00:31:16.936 "strip_size_kb": 64, 00:31:16.936 "state": "online", 
00:31:16.936 "raid_level": "raid5f", 00:31:16.936 "superblock": false, 00:31:16.936 "num_base_bdevs": 3, 00:31:16.936 "num_base_bdevs_discovered": 3, 00:31:16.936 "num_base_bdevs_operational": 3, 00:31:16.936 "base_bdevs_list": [ 00:31:16.936 { 00:31:16.936 "name": "NewBaseBdev", 00:31:16.936 "uuid": "fb127a82-c538-4eaa-bf30-9294f07c4f02", 00:31:16.936 "is_configured": true, 00:31:16.936 "data_offset": 0, 00:31:16.936 "data_size": 65536 00:31:16.936 }, 00:31:16.936 { 00:31:16.936 "name": "BaseBdev2", 00:31:16.936 "uuid": "02313e3d-6dd1-4416-a011-dc392fe127ac", 00:31:16.936 "is_configured": true, 00:31:16.936 "data_offset": 0, 00:31:16.936 "data_size": 65536 00:31:16.936 }, 00:31:16.936 { 00:31:16.936 "name": "BaseBdev3", 00:31:16.936 "uuid": "62c21cd9-4c0e-4aac-98d5-d9f0cb9ed52e", 00:31:16.936 "is_configured": true, 00:31:16.936 "data_offset": 0, 00:31:16.936 "data_size": 65536 00:31:16.936 } 00:31:16.936 ] 00:31:16.936 }' 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:16.936 17:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.194 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:31:17.194 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:17.194 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:17.194 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:17.194 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:17.194 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:17.194 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:17.194 17:27:47 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:17.194 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.194 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.194 [2024-11-26 17:27:47.242495] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:17.194 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.194 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:17.194 "name": "Existed_Raid", 00:31:17.194 "aliases": [ 00:31:17.194 "2a1433d6-b1f3-45ae-be1b-016a820c1e5e" 00:31:17.194 ], 00:31:17.194 "product_name": "Raid Volume", 00:31:17.194 "block_size": 512, 00:31:17.194 "num_blocks": 131072, 00:31:17.194 "uuid": "2a1433d6-b1f3-45ae-be1b-016a820c1e5e", 00:31:17.194 "assigned_rate_limits": { 00:31:17.194 "rw_ios_per_sec": 0, 00:31:17.194 "rw_mbytes_per_sec": 0, 00:31:17.194 "r_mbytes_per_sec": 0, 00:31:17.194 "w_mbytes_per_sec": 0 00:31:17.194 }, 00:31:17.194 "claimed": false, 00:31:17.194 "zoned": false, 00:31:17.194 "supported_io_types": { 00:31:17.194 "read": true, 00:31:17.194 "write": true, 00:31:17.194 "unmap": false, 00:31:17.194 "flush": false, 00:31:17.194 "reset": true, 00:31:17.194 "nvme_admin": false, 00:31:17.194 "nvme_io": false, 00:31:17.194 "nvme_io_md": false, 00:31:17.194 "write_zeroes": true, 00:31:17.194 "zcopy": false, 00:31:17.194 "get_zone_info": false, 00:31:17.194 "zone_management": false, 00:31:17.194 "zone_append": false, 00:31:17.194 "compare": false, 00:31:17.194 "compare_and_write": false, 00:31:17.194 "abort": false, 00:31:17.194 "seek_hole": false, 00:31:17.194 "seek_data": false, 00:31:17.194 "copy": false, 00:31:17.194 "nvme_iov_md": false 00:31:17.194 }, 00:31:17.194 "driver_specific": { 00:31:17.194 "raid": { 00:31:17.194 "uuid": "2a1433d6-b1f3-45ae-be1b-016a820c1e5e", 
00:31:17.194 "strip_size_kb": 64, 00:31:17.194 "state": "online", 00:31:17.194 "raid_level": "raid5f", 00:31:17.194 "superblock": false, 00:31:17.194 "num_base_bdevs": 3, 00:31:17.194 "num_base_bdevs_discovered": 3, 00:31:17.194 "num_base_bdevs_operational": 3, 00:31:17.194 "base_bdevs_list": [ 00:31:17.194 { 00:31:17.194 "name": "NewBaseBdev", 00:31:17.194 "uuid": "fb127a82-c538-4eaa-bf30-9294f07c4f02", 00:31:17.194 "is_configured": true, 00:31:17.194 "data_offset": 0, 00:31:17.194 "data_size": 65536 00:31:17.194 }, 00:31:17.194 { 00:31:17.194 "name": "BaseBdev2", 00:31:17.194 "uuid": "02313e3d-6dd1-4416-a011-dc392fe127ac", 00:31:17.194 "is_configured": true, 00:31:17.194 "data_offset": 0, 00:31:17.194 "data_size": 65536 00:31:17.194 }, 00:31:17.194 { 00:31:17.194 "name": "BaseBdev3", 00:31:17.194 "uuid": "62c21cd9-4c0e-4aac-98d5-d9f0cb9ed52e", 00:31:17.194 "is_configured": true, 00:31:17.194 "data_offset": 0, 00:31:17.194 "data_size": 65536 00:31:17.194 } 00:31:17.194 ] 00:31:17.194 } 00:31:17.194 } 00:31:17.194 }' 00:31:17.194 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:31:17.454 BaseBdev2 00:31:17.454 BaseBdev3' 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:17.454 
17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.454 [2024-11-26 17:27:47.525863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:17.454 [2024-11-26 17:27:47.526011] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:17.454 [2024-11-26 17:27:47.526245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:17.454 [2024-11-26 17:27:47.526604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:17.454 [2024-11-26 17:27:47.526629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80046 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80046 ']' 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80046 
00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:17.454 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80046 00:31:17.712 killing process with pid 80046 00:31:17.712 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:17.712 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:17.712 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80046' 00:31:17.712 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80046 00:31:17.712 [2024-11-26 17:27:47.576387] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:17.712 17:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80046 00:31:17.971 [2024-11-26 17:27:47.888017] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:31:19.350 00:31:19.350 real 0m10.771s 00:31:19.350 user 0m16.865s 00:31:19.350 sys 0m2.392s 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:19.350 ************************************ 00:31:19.350 END TEST raid5f_state_function_test 00:31:19.350 ************************************ 00:31:19.350 17:27:49 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:31:19.350 17:27:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:19.350 
17:27:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:19.350 17:27:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:19.350 ************************************ 00:31:19.350 START TEST raid5f_state_function_test_sb 00:31:19.350 ************************************ 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:19.350 
17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:31:19.350 Process raid pid: 80669 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80669 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80669' 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80669 00:31:19.350 17:27:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80669 ']' 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:19.350 17:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.350 [2024-11-26 17:27:49.272283] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:31:19.350 [2024-11-26 17:27:49.272433] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:19.350 [2024-11-26 17:27:49.456556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.609 [2024-11-26 17:27:49.605914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.868 [2024-11-26 17:27:49.854476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:19.868 [2024-11-26 17:27:49.854537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:31:20.127 17:27:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.127 [2024-11-26 17:27:50.154686] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:20.127 [2024-11-26 17:27:50.154911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:20.127 [2024-11-26 17:27:50.155010] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:20.127 [2024-11-26 17:27:50.155058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:20.127 [2024-11-26 17:27:50.155229] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:20.127 [2024-11-26 17:27:50.155275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:20.127 "name": "Existed_Raid", 00:31:20.127 "uuid": "2758c1ee-61a4-454a-8d24-bbb511ed0657", 00:31:20.127 "strip_size_kb": 64, 00:31:20.127 "state": "configuring", 00:31:20.127 "raid_level": "raid5f", 00:31:20.127 "superblock": true, 00:31:20.127 "num_base_bdevs": 3, 00:31:20.127 "num_base_bdevs_discovered": 0, 00:31:20.127 "num_base_bdevs_operational": 3, 00:31:20.127 "base_bdevs_list": [ 00:31:20.127 { 00:31:20.127 "name": "BaseBdev1", 00:31:20.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:20.127 "is_configured": false, 00:31:20.127 "data_offset": 0, 00:31:20.127 "data_size": 0 00:31:20.127 }, 00:31:20.127 { 00:31:20.127 "name": "BaseBdev2", 00:31:20.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:20.127 "is_configured": false, 00:31:20.127 
"data_offset": 0, 00:31:20.127 "data_size": 0 00:31:20.127 }, 00:31:20.127 { 00:31:20.127 "name": "BaseBdev3", 00:31:20.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:20.127 "is_configured": false, 00:31:20.127 "data_offset": 0, 00:31:20.127 "data_size": 0 00:31:20.127 } 00:31:20.127 ] 00:31:20.127 }' 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:20.127 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.694 [2024-11-26 17:27:50.597957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:20.694 [2024-11-26 17:27:50.598002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.694 [2024-11-26 17:27:50.605946] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:20.694 [2024-11-26 17:27:50.606112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:20.694 [2024-11-26 17:27:50.606216] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:20.694 [2024-11-26 17:27:50.606262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:20.694 [2024-11-26 17:27:50.606291] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:20.694 [2024-11-26 17:27:50.606324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.694 BaseBdev1 00:31:20.694 [2024-11-26 17:27:50.655722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.694 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.694 [ 00:31:20.694 { 00:31:20.694 "name": "BaseBdev1", 00:31:20.694 "aliases": [ 00:31:20.694 "0504b252-5220-48d7-93d3-c1ac2ff47a54" 00:31:20.695 ], 00:31:20.695 "product_name": "Malloc disk", 00:31:20.695 "block_size": 512, 00:31:20.695 "num_blocks": 65536, 00:31:20.695 "uuid": "0504b252-5220-48d7-93d3-c1ac2ff47a54", 00:31:20.695 "assigned_rate_limits": { 00:31:20.695 "rw_ios_per_sec": 0, 00:31:20.695 "rw_mbytes_per_sec": 0, 00:31:20.695 "r_mbytes_per_sec": 0, 00:31:20.695 "w_mbytes_per_sec": 0 00:31:20.695 }, 00:31:20.695 "claimed": true, 00:31:20.695 "claim_type": "exclusive_write", 00:31:20.695 "zoned": false, 00:31:20.695 "supported_io_types": { 00:31:20.695 "read": true, 00:31:20.695 "write": true, 00:31:20.695 "unmap": true, 00:31:20.695 "flush": true, 00:31:20.695 "reset": true, 00:31:20.695 "nvme_admin": false, 00:31:20.695 "nvme_io": false, 00:31:20.695 "nvme_io_md": false, 00:31:20.695 "write_zeroes": true, 00:31:20.695 "zcopy": true, 00:31:20.695 "get_zone_info": false, 00:31:20.695 "zone_management": false, 00:31:20.695 "zone_append": false, 00:31:20.695 "compare": false, 00:31:20.695 "compare_and_write": false, 00:31:20.695 "abort": true, 00:31:20.695 "seek_hole": false, 00:31:20.695 
"seek_data": false, 00:31:20.695 "copy": true, 00:31:20.695 "nvme_iov_md": false 00:31:20.695 }, 00:31:20.695 "memory_domains": [ 00:31:20.695 { 00:31:20.695 "dma_device_id": "system", 00:31:20.695 "dma_device_type": 1 00:31:20.695 }, 00:31:20.695 { 00:31:20.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.695 "dma_device_type": 2 00:31:20.695 } 00:31:20.695 ], 00:31:20.695 "driver_specific": {} 00:31:20.695 } 00:31:20.695 ] 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:20.695 "name": "Existed_Raid", 00:31:20.695 "uuid": "15574e57-ebac-4e8b-a563-874dcfa1a6d5", 00:31:20.695 "strip_size_kb": 64, 00:31:20.695 "state": "configuring", 00:31:20.695 "raid_level": "raid5f", 00:31:20.695 "superblock": true, 00:31:20.695 "num_base_bdevs": 3, 00:31:20.695 "num_base_bdevs_discovered": 1, 00:31:20.695 "num_base_bdevs_operational": 3, 00:31:20.695 "base_bdevs_list": [ 00:31:20.695 { 00:31:20.695 "name": "BaseBdev1", 00:31:20.695 "uuid": "0504b252-5220-48d7-93d3-c1ac2ff47a54", 00:31:20.695 "is_configured": true, 00:31:20.695 "data_offset": 2048, 00:31:20.695 "data_size": 63488 00:31:20.695 }, 00:31:20.695 { 00:31:20.695 "name": "BaseBdev2", 00:31:20.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:20.695 "is_configured": false, 00:31:20.695 "data_offset": 0, 00:31:20.695 "data_size": 0 00:31:20.695 }, 00:31:20.695 { 00:31:20.695 "name": "BaseBdev3", 00:31:20.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:20.695 "is_configured": false, 00:31:20.695 "data_offset": 0, 00:31:20.695 "data_size": 0 00:31:20.695 } 00:31:20.695 ] 00:31:20.695 }' 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:20.695 17:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.273 [2024-11-26 17:27:51.139278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:21.273 [2024-11-26 17:27:51.139347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.273 [2024-11-26 17:27:51.151329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:21.273 [2024-11-26 17:27:51.153678] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:21.273 [2024-11-26 17:27:51.153727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:21.273 [2024-11-26 17:27:51.153739] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:21.273 [2024-11-26 17:27:51.153752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:21.273 "name": 
"Existed_Raid", 00:31:21.273 "uuid": "ccbe5021-6d16-44f0-a6a4-d850118d0d93", 00:31:21.273 "strip_size_kb": 64, 00:31:21.273 "state": "configuring", 00:31:21.273 "raid_level": "raid5f", 00:31:21.273 "superblock": true, 00:31:21.273 "num_base_bdevs": 3, 00:31:21.273 "num_base_bdevs_discovered": 1, 00:31:21.273 "num_base_bdevs_operational": 3, 00:31:21.273 "base_bdevs_list": [ 00:31:21.273 { 00:31:21.273 "name": "BaseBdev1", 00:31:21.273 "uuid": "0504b252-5220-48d7-93d3-c1ac2ff47a54", 00:31:21.273 "is_configured": true, 00:31:21.273 "data_offset": 2048, 00:31:21.273 "data_size": 63488 00:31:21.273 }, 00:31:21.273 { 00:31:21.273 "name": "BaseBdev2", 00:31:21.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:21.273 "is_configured": false, 00:31:21.273 "data_offset": 0, 00:31:21.273 "data_size": 0 00:31:21.273 }, 00:31:21.273 { 00:31:21.273 "name": "BaseBdev3", 00:31:21.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:21.273 "is_configured": false, 00:31:21.273 "data_offset": 0, 00:31:21.273 "data_size": 0 00:31:21.273 } 00:31:21.273 ] 00:31:21.273 }' 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:21.273 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.533 [2024-11-26 17:27:51.627029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:21.533 BaseBdev2 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.533 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.791 [ 00:31:21.791 { 00:31:21.791 "name": "BaseBdev2", 00:31:21.791 "aliases": [ 00:31:21.791 "e76f8c7a-5a56-4a9d-9b4a-fe645b210a62" 00:31:21.791 ], 00:31:21.791 "product_name": "Malloc disk", 00:31:21.791 "block_size": 512, 00:31:21.791 "num_blocks": 65536, 00:31:21.791 "uuid": "e76f8c7a-5a56-4a9d-9b4a-fe645b210a62", 00:31:21.791 "assigned_rate_limits": { 00:31:21.791 "rw_ios_per_sec": 0, 00:31:21.791 "rw_mbytes_per_sec": 0, 00:31:21.791 "r_mbytes_per_sec": 0, 00:31:21.791 "w_mbytes_per_sec": 0 00:31:21.791 }, 00:31:21.791 "claimed": true, 
00:31:21.791 "claim_type": "exclusive_write", 00:31:21.791 "zoned": false, 00:31:21.791 "supported_io_types": { 00:31:21.791 "read": true, 00:31:21.791 "write": true, 00:31:21.791 "unmap": true, 00:31:21.791 "flush": true, 00:31:21.791 "reset": true, 00:31:21.791 "nvme_admin": false, 00:31:21.791 "nvme_io": false, 00:31:21.791 "nvme_io_md": false, 00:31:21.791 "write_zeroes": true, 00:31:21.791 "zcopy": true, 00:31:21.791 "get_zone_info": false, 00:31:21.791 "zone_management": false, 00:31:21.791 "zone_append": false, 00:31:21.791 "compare": false, 00:31:21.791 "compare_and_write": false, 00:31:21.791 "abort": true, 00:31:21.791 "seek_hole": false, 00:31:21.791 "seek_data": false, 00:31:21.791 "copy": true, 00:31:21.791 "nvme_iov_md": false 00:31:21.791 }, 00:31:21.791 "memory_domains": [ 00:31:21.791 { 00:31:21.791 "dma_device_id": "system", 00:31:21.791 "dma_device_type": 1 00:31:21.791 }, 00:31:21.791 { 00:31:21.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:21.791 "dma_device_type": 2 00:31:21.791 } 00:31:21.791 ], 00:31:21.791 "driver_specific": {} 00:31:21.791 } 00:31:21.791 ] 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:21.791 17:27:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:21.791 "name": "Existed_Raid", 00:31:21.791 "uuid": "ccbe5021-6d16-44f0-a6a4-d850118d0d93", 00:31:21.791 "strip_size_kb": 64, 00:31:21.791 "state": "configuring", 00:31:21.791 "raid_level": "raid5f", 00:31:21.791 "superblock": true, 00:31:21.791 "num_base_bdevs": 3, 00:31:21.791 "num_base_bdevs_discovered": 2, 00:31:21.791 "num_base_bdevs_operational": 3, 00:31:21.791 "base_bdevs_list": [ 00:31:21.791 { 00:31:21.791 "name": "BaseBdev1", 00:31:21.791 "uuid": "0504b252-5220-48d7-93d3-c1ac2ff47a54", 
00:31:21.791 "is_configured": true, 00:31:21.791 "data_offset": 2048, 00:31:21.791 "data_size": 63488 00:31:21.791 }, 00:31:21.791 { 00:31:21.791 "name": "BaseBdev2", 00:31:21.791 "uuid": "e76f8c7a-5a56-4a9d-9b4a-fe645b210a62", 00:31:21.791 "is_configured": true, 00:31:21.791 "data_offset": 2048, 00:31:21.791 "data_size": 63488 00:31:21.791 }, 00:31:21.791 { 00:31:21.791 "name": "BaseBdev3", 00:31:21.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:21.791 "is_configured": false, 00:31:21.791 "data_offset": 0, 00:31:21.791 "data_size": 0 00:31:21.791 } 00:31:21.791 ] 00:31:21.791 }' 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:21.791 17:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.049 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:22.049 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.049 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.308 [2024-11-26 17:27:52.185660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:22.308 [2024-11-26 17:27:52.186255] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:22.308 [2024-11-26 17:27:52.186289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:22.308 BaseBdev3 00:31:22.308 [2024-11-26 17:27:52.186766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:22.308 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.308 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:31:22.308 17:27:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:31:22.308 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:22.308 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:22.308 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:22.308 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:22.308 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:22.308 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.308 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.308 [2024-11-26 17:27:52.192933] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:22.308 [2024-11-26 17:27:52.193081] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:31:22.308 [2024-11-26 17:27:52.193405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:22.308 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.308 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.309 [ 00:31:22.309 { 00:31:22.309 "name": "BaseBdev3", 00:31:22.309 "aliases": [ 00:31:22.309 "fd5eda76-4ede-475b-a579-514f9a34b0d3" 00:31:22.309 ], 00:31:22.309 "product_name": "Malloc disk", 00:31:22.309 "block_size": 512, 00:31:22.309 
"num_blocks": 65536, 00:31:22.309 "uuid": "fd5eda76-4ede-475b-a579-514f9a34b0d3", 00:31:22.309 "assigned_rate_limits": { 00:31:22.309 "rw_ios_per_sec": 0, 00:31:22.309 "rw_mbytes_per_sec": 0, 00:31:22.309 "r_mbytes_per_sec": 0, 00:31:22.309 "w_mbytes_per_sec": 0 00:31:22.309 }, 00:31:22.309 "claimed": true, 00:31:22.309 "claim_type": "exclusive_write", 00:31:22.309 "zoned": false, 00:31:22.309 "supported_io_types": { 00:31:22.309 "read": true, 00:31:22.309 "write": true, 00:31:22.309 "unmap": true, 00:31:22.309 "flush": true, 00:31:22.309 "reset": true, 00:31:22.309 "nvme_admin": false, 00:31:22.309 "nvme_io": false, 00:31:22.309 "nvme_io_md": false, 00:31:22.309 "write_zeroes": true, 00:31:22.309 "zcopy": true, 00:31:22.309 "get_zone_info": false, 00:31:22.309 "zone_management": false, 00:31:22.309 "zone_append": false, 00:31:22.309 "compare": false, 00:31:22.309 "compare_and_write": false, 00:31:22.309 "abort": true, 00:31:22.309 "seek_hole": false, 00:31:22.309 "seek_data": false, 00:31:22.309 "copy": true, 00:31:22.309 "nvme_iov_md": false 00:31:22.309 }, 00:31:22.309 "memory_domains": [ 00:31:22.309 { 00:31:22.309 "dma_device_id": "system", 00:31:22.309 "dma_device_type": 1 00:31:22.309 }, 00:31:22.309 { 00:31:22.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:22.309 "dma_device_type": 2 00:31:22.309 } 00:31:22.309 ], 00:31:22.309 "driver_specific": {} 00:31:22.309 } 00:31:22.309 ] 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:22.309 "name": "Existed_Raid", 00:31:22.309 "uuid": "ccbe5021-6d16-44f0-a6a4-d850118d0d93", 00:31:22.309 "strip_size_kb": 64, 00:31:22.309 "state": "online", 00:31:22.309 "raid_level": "raid5f", 00:31:22.309 "superblock": true, 
00:31:22.309 "num_base_bdevs": 3, 00:31:22.309 "num_base_bdevs_discovered": 3, 00:31:22.309 "num_base_bdevs_operational": 3, 00:31:22.309 "base_bdevs_list": [ 00:31:22.309 { 00:31:22.309 "name": "BaseBdev1", 00:31:22.309 "uuid": "0504b252-5220-48d7-93d3-c1ac2ff47a54", 00:31:22.309 "is_configured": true, 00:31:22.309 "data_offset": 2048, 00:31:22.309 "data_size": 63488 00:31:22.309 }, 00:31:22.309 { 00:31:22.309 "name": "BaseBdev2", 00:31:22.309 "uuid": "e76f8c7a-5a56-4a9d-9b4a-fe645b210a62", 00:31:22.309 "is_configured": true, 00:31:22.309 "data_offset": 2048, 00:31:22.309 "data_size": 63488 00:31:22.309 }, 00:31:22.309 { 00:31:22.309 "name": "BaseBdev3", 00:31:22.309 "uuid": "fd5eda76-4ede-475b-a579-514f9a34b0d3", 00:31:22.309 "is_configured": true, 00:31:22.309 "data_offset": 2048, 00:31:22.309 "data_size": 63488 00:31:22.309 } 00:31:22.309 ] 00:31:22.309 }' 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:22.309 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.876 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:22.876 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:22.876 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:22.876 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:22.876 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:31:22.876 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:22.876 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:22.876 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:22.876 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.876 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.876 [2024-11-26 17:27:52.712011] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:22.876 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.876 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:22.876 "name": "Existed_Raid", 00:31:22.876 "aliases": [ 00:31:22.876 "ccbe5021-6d16-44f0-a6a4-d850118d0d93" 00:31:22.876 ], 00:31:22.876 "product_name": "Raid Volume", 00:31:22.876 "block_size": 512, 00:31:22.876 "num_blocks": 126976, 00:31:22.876 "uuid": "ccbe5021-6d16-44f0-a6a4-d850118d0d93", 00:31:22.876 "assigned_rate_limits": { 00:31:22.876 "rw_ios_per_sec": 0, 00:31:22.876 "rw_mbytes_per_sec": 0, 00:31:22.876 "r_mbytes_per_sec": 0, 00:31:22.876 "w_mbytes_per_sec": 0 00:31:22.876 }, 00:31:22.876 "claimed": false, 00:31:22.876 "zoned": false, 00:31:22.876 "supported_io_types": { 00:31:22.876 "read": true, 00:31:22.876 "write": true, 00:31:22.876 "unmap": false, 00:31:22.876 "flush": false, 00:31:22.876 "reset": true, 00:31:22.876 "nvme_admin": false, 00:31:22.876 "nvme_io": false, 00:31:22.876 "nvme_io_md": false, 00:31:22.876 "write_zeroes": true, 00:31:22.876 "zcopy": false, 00:31:22.876 "get_zone_info": false, 00:31:22.876 "zone_management": false, 00:31:22.876 "zone_append": false, 00:31:22.876 "compare": false, 00:31:22.876 "compare_and_write": false, 00:31:22.876 "abort": false, 00:31:22.876 "seek_hole": false, 00:31:22.876 "seek_data": false, 00:31:22.876 "copy": false, 00:31:22.876 "nvme_iov_md": false 00:31:22.876 }, 00:31:22.876 "driver_specific": { 00:31:22.876 "raid": { 00:31:22.876 "uuid": "ccbe5021-6d16-44f0-a6a4-d850118d0d93", 00:31:22.876 
"strip_size_kb": 64, 00:31:22.876 "state": "online", 00:31:22.876 "raid_level": "raid5f", 00:31:22.876 "superblock": true, 00:31:22.877 "num_base_bdevs": 3, 00:31:22.877 "num_base_bdevs_discovered": 3, 00:31:22.877 "num_base_bdevs_operational": 3, 00:31:22.877 "base_bdevs_list": [ 00:31:22.877 { 00:31:22.877 "name": "BaseBdev1", 00:31:22.877 "uuid": "0504b252-5220-48d7-93d3-c1ac2ff47a54", 00:31:22.877 "is_configured": true, 00:31:22.877 "data_offset": 2048, 00:31:22.877 "data_size": 63488 00:31:22.877 }, 00:31:22.877 { 00:31:22.877 "name": "BaseBdev2", 00:31:22.877 "uuid": "e76f8c7a-5a56-4a9d-9b4a-fe645b210a62", 00:31:22.877 "is_configured": true, 00:31:22.877 "data_offset": 2048, 00:31:22.877 "data_size": 63488 00:31:22.877 }, 00:31:22.877 { 00:31:22.877 "name": "BaseBdev3", 00:31:22.877 "uuid": "fd5eda76-4ede-475b-a579-514f9a34b0d3", 00:31:22.877 "is_configured": true, 00:31:22.877 "data_offset": 2048, 00:31:22.877 "data_size": 63488 00:31:22.877 } 00:31:22.877 ] 00:31:22.877 } 00:31:22.877 } 00:31:22.877 }' 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:31:22.877 BaseBdev2 00:31:22.877 BaseBdev3' 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.877 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.136 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:23.136 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:23.136 17:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:23.136 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.136 17:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.136 [2024-11-26 17:27:53.003486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:23.136 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.137 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.137 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.137 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:23.137 "name": "Existed_Raid", 00:31:23.137 "uuid": "ccbe5021-6d16-44f0-a6a4-d850118d0d93", 00:31:23.137 "strip_size_kb": 64, 00:31:23.137 "state": "online", 00:31:23.137 "raid_level": "raid5f", 00:31:23.137 "superblock": true, 00:31:23.137 "num_base_bdevs": 3, 00:31:23.137 "num_base_bdevs_discovered": 2, 00:31:23.137 "num_base_bdevs_operational": 2, 
00:31:23.137 "base_bdevs_list": [ 00:31:23.137 { 00:31:23.137 "name": null, 00:31:23.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.137 "is_configured": false, 00:31:23.137 "data_offset": 0, 00:31:23.137 "data_size": 63488 00:31:23.137 }, 00:31:23.137 { 00:31:23.137 "name": "BaseBdev2", 00:31:23.137 "uuid": "e76f8c7a-5a56-4a9d-9b4a-fe645b210a62", 00:31:23.137 "is_configured": true, 00:31:23.137 "data_offset": 2048, 00:31:23.137 "data_size": 63488 00:31:23.137 }, 00:31:23.137 { 00:31:23.137 "name": "BaseBdev3", 00:31:23.137 "uuid": "fd5eda76-4ede-475b-a579-514f9a34b0d3", 00:31:23.137 "is_configured": true, 00:31:23.137 "data_offset": 2048, 00:31:23.137 "data_size": 63488 00:31:23.137 } 00:31:23.137 ] 00:31:23.137 }' 00:31:23.137 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:23.137 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.704 [2024-11-26 17:27:53.613395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:23.704 [2024-11-26 17:27:53.613792] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:23.704 [2024-11-26 17:27:53.718083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:31:23.704 
17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.704 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.704 [2024-11-26 17:27:53.790056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:23.705 [2024-11-26 17:27:53.790287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:31:23.963 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.963 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.964 BaseBdev2 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.964 17:27:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.964 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.964 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:23.964 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.964 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.964 [ 00:31:23.964 { 
00:31:23.964 "name": "BaseBdev2", 00:31:23.964 "aliases": [ 00:31:23.964 "a5e1fe67-b64b-44af-8c82-b28ac4248bf3" 00:31:23.964 ], 00:31:23.964 "product_name": "Malloc disk", 00:31:23.964 "block_size": 512, 00:31:23.964 "num_blocks": 65536, 00:31:23.964 "uuid": "a5e1fe67-b64b-44af-8c82-b28ac4248bf3", 00:31:23.964 "assigned_rate_limits": { 00:31:23.964 "rw_ios_per_sec": 0, 00:31:23.964 "rw_mbytes_per_sec": 0, 00:31:23.964 "r_mbytes_per_sec": 0, 00:31:23.964 "w_mbytes_per_sec": 0 00:31:23.964 }, 00:31:23.964 "claimed": false, 00:31:23.964 "zoned": false, 00:31:23.964 "supported_io_types": { 00:31:23.964 "read": true, 00:31:23.964 "write": true, 00:31:23.964 "unmap": true, 00:31:23.964 "flush": true, 00:31:23.964 "reset": true, 00:31:23.964 "nvme_admin": false, 00:31:23.964 "nvme_io": false, 00:31:23.964 "nvme_io_md": false, 00:31:23.964 "write_zeroes": true, 00:31:23.964 "zcopy": true, 00:31:23.964 "get_zone_info": false, 00:31:23.964 "zone_management": false, 00:31:23.964 "zone_append": false, 00:31:23.964 "compare": false, 00:31:23.964 "compare_and_write": false, 00:31:23.964 "abort": true, 00:31:23.964 "seek_hole": false, 00:31:23.964 "seek_data": false, 00:31:23.964 "copy": true, 00:31:23.964 "nvme_iov_md": false 00:31:23.964 }, 00:31:23.964 "memory_domains": [ 00:31:23.964 { 00:31:23.964 "dma_device_id": "system", 00:31:23.964 "dma_device_type": 1 00:31:23.964 }, 00:31:23.964 { 00:31:23.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:23.964 "dma_device_type": 2 00:31:23.964 } 00:31:23.964 ], 00:31:23.964 "driver_specific": {} 00:31:23.964 } 00:31:23.964 ] 00:31:23.964 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.964 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:23.964 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:23.964 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:31:23.964 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:23.964 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.964 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:24.224 BaseBdev3 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.224 17:27:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:24.224 [ 00:31:24.224 { 00:31:24.224 "name": "BaseBdev3", 00:31:24.224 "aliases": [ 00:31:24.224 "37538f20-987a-4486-b53c-982ad50e50a0" 00:31:24.224 ], 00:31:24.224 "product_name": "Malloc disk", 00:31:24.224 "block_size": 512, 00:31:24.224 "num_blocks": 65536, 00:31:24.224 "uuid": "37538f20-987a-4486-b53c-982ad50e50a0", 00:31:24.224 "assigned_rate_limits": { 00:31:24.224 "rw_ios_per_sec": 0, 00:31:24.224 "rw_mbytes_per_sec": 0, 00:31:24.224 "r_mbytes_per_sec": 0, 00:31:24.224 "w_mbytes_per_sec": 0 00:31:24.224 }, 00:31:24.224 "claimed": false, 00:31:24.224 "zoned": false, 00:31:24.224 "supported_io_types": { 00:31:24.224 "read": true, 00:31:24.224 "write": true, 00:31:24.224 "unmap": true, 00:31:24.224 "flush": true, 00:31:24.224 "reset": true, 00:31:24.224 "nvme_admin": false, 00:31:24.224 "nvme_io": false, 00:31:24.224 "nvme_io_md": false, 00:31:24.224 "write_zeroes": true, 00:31:24.224 "zcopy": true, 00:31:24.224 "get_zone_info": false, 00:31:24.224 "zone_management": false, 00:31:24.224 "zone_append": false, 00:31:24.224 "compare": false, 00:31:24.224 "compare_and_write": false, 00:31:24.224 "abort": true, 00:31:24.224 "seek_hole": false, 00:31:24.224 "seek_data": false, 00:31:24.224 "copy": true, 00:31:24.224 "nvme_iov_md": false 00:31:24.224 }, 00:31:24.224 "memory_domains": [ 00:31:24.224 { 00:31:24.224 "dma_device_id": "system", 00:31:24.224 "dma_device_type": 1 00:31:24.224 }, 00:31:24.224 { 00:31:24.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:24.224 "dma_device_type": 2 00:31:24.224 } 00:31:24.224 ], 00:31:24.224 "driver_specific": {} 00:31:24.224 } 00:31:24.224 ] 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:24.224 [2024-11-26 17:27:54.145196] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:24.224 [2024-11-26 17:27:54.145382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:24.224 [2024-11-26 17:27:54.145530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:24.224 [2024-11-26 17:27:54.147906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:24.224 17:27:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:24.224 "name": "Existed_Raid", 00:31:24.224 "uuid": "8606b2a8-348e-4b59-91db-61502275beae", 00:31:24.224 "strip_size_kb": 64, 00:31:24.224 "state": "configuring", 00:31:24.224 "raid_level": "raid5f", 00:31:24.224 "superblock": true, 00:31:24.224 "num_base_bdevs": 3, 00:31:24.224 "num_base_bdevs_discovered": 2, 00:31:24.224 "num_base_bdevs_operational": 3, 00:31:24.224 "base_bdevs_list": [ 00:31:24.224 { 00:31:24.224 "name": "BaseBdev1", 00:31:24.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.224 "is_configured": false, 00:31:24.224 "data_offset": 0, 00:31:24.224 "data_size": 0 00:31:24.224 }, 00:31:24.224 { 00:31:24.224 "name": "BaseBdev2", 00:31:24.224 "uuid": "a5e1fe67-b64b-44af-8c82-b28ac4248bf3", 00:31:24.224 "is_configured": true, 00:31:24.224 "data_offset": 2048, 00:31:24.224 "data_size": 63488 00:31:24.224 }, 00:31:24.224 { 
00:31:24.224 "name": "BaseBdev3", 00:31:24.224 "uuid": "37538f20-987a-4486-b53c-982ad50e50a0", 00:31:24.224 "is_configured": true, 00:31:24.224 "data_offset": 2048, 00:31:24.224 "data_size": 63488 00:31:24.224 } 00:31:24.224 ] 00:31:24.224 }' 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:24.224 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:24.793 [2024-11-26 17:27:54.624525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:24.793 "name": "Existed_Raid", 00:31:24.793 "uuid": "8606b2a8-348e-4b59-91db-61502275beae", 00:31:24.793 "strip_size_kb": 64, 00:31:24.793 "state": "configuring", 00:31:24.793 "raid_level": "raid5f", 00:31:24.793 "superblock": true, 00:31:24.793 "num_base_bdevs": 3, 00:31:24.793 "num_base_bdevs_discovered": 1, 00:31:24.793 "num_base_bdevs_operational": 3, 00:31:24.793 "base_bdevs_list": [ 00:31:24.793 { 00:31:24.793 "name": "BaseBdev1", 00:31:24.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.793 "is_configured": false, 00:31:24.793 "data_offset": 0, 00:31:24.793 "data_size": 0 00:31:24.793 }, 00:31:24.793 { 00:31:24.793 "name": null, 00:31:24.793 "uuid": "a5e1fe67-b64b-44af-8c82-b28ac4248bf3", 00:31:24.793 "is_configured": false, 00:31:24.793 "data_offset": 0, 00:31:24.793 "data_size": 63488 00:31:24.793 }, 00:31:24.793 { 00:31:24.793 "name": "BaseBdev3", 00:31:24.793 "uuid": "37538f20-987a-4486-b53c-982ad50e50a0", 00:31:24.793 "is_configured": true, 00:31:24.793 "data_offset": 2048, 00:31:24.793 "data_size": 
63488 00:31:24.793 } 00:31:24.793 ] 00:31:24.793 }' 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:24.793 17:27:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:25.052 [2024-11-26 17:27:55.119487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:25.052 BaseBdev1 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:25.052 17:27:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.052 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:25.052 [ 00:31:25.052 { 00:31:25.052 "name": "BaseBdev1", 00:31:25.052 "aliases": [ 00:31:25.052 "b6215dcd-e44a-49f2-8d06-84a9694417db" 00:31:25.052 ], 00:31:25.052 "product_name": "Malloc disk", 00:31:25.052 "block_size": 512, 00:31:25.052 "num_blocks": 65536, 00:31:25.052 "uuid": "b6215dcd-e44a-49f2-8d06-84a9694417db", 00:31:25.052 "assigned_rate_limits": { 00:31:25.052 "rw_ios_per_sec": 0, 00:31:25.052 "rw_mbytes_per_sec": 0, 00:31:25.052 "r_mbytes_per_sec": 0, 00:31:25.052 "w_mbytes_per_sec": 0 00:31:25.052 }, 00:31:25.052 "claimed": true, 00:31:25.052 "claim_type": "exclusive_write", 00:31:25.052 "zoned": false, 00:31:25.052 "supported_io_types": { 00:31:25.052 "read": true, 00:31:25.052 "write": true, 00:31:25.052 "unmap": true, 00:31:25.052 "flush": true, 00:31:25.052 "reset": true, 00:31:25.052 "nvme_admin": false, 00:31:25.052 
"nvme_io": false, 00:31:25.052 "nvme_io_md": false, 00:31:25.052 "write_zeroes": true, 00:31:25.052 "zcopy": true, 00:31:25.052 "get_zone_info": false, 00:31:25.052 "zone_management": false, 00:31:25.052 "zone_append": false, 00:31:25.052 "compare": false, 00:31:25.052 "compare_and_write": false, 00:31:25.052 "abort": true, 00:31:25.052 "seek_hole": false, 00:31:25.052 "seek_data": false, 00:31:25.052 "copy": true, 00:31:25.052 "nvme_iov_md": false 00:31:25.052 }, 00:31:25.052 "memory_domains": [ 00:31:25.052 { 00:31:25.052 "dma_device_id": "system", 00:31:25.052 "dma_device_type": 1 00:31:25.052 }, 00:31:25.052 { 00:31:25.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:25.052 "dma_device_type": 2 00:31:25.337 } 00:31:25.337 ], 00:31:25.337 "driver_specific": {} 00:31:25.337 } 00:31:25.337 ] 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:25.337 "name": "Existed_Raid", 00:31:25.337 "uuid": "8606b2a8-348e-4b59-91db-61502275beae", 00:31:25.337 "strip_size_kb": 64, 00:31:25.337 "state": "configuring", 00:31:25.337 "raid_level": "raid5f", 00:31:25.337 "superblock": true, 00:31:25.337 "num_base_bdevs": 3, 00:31:25.337 "num_base_bdevs_discovered": 2, 00:31:25.337 "num_base_bdevs_operational": 3, 00:31:25.337 "base_bdevs_list": [ 00:31:25.337 { 00:31:25.337 "name": "BaseBdev1", 00:31:25.337 "uuid": "b6215dcd-e44a-49f2-8d06-84a9694417db", 00:31:25.337 "is_configured": true, 00:31:25.337 "data_offset": 2048, 00:31:25.337 "data_size": 63488 00:31:25.337 }, 00:31:25.337 { 00:31:25.337 "name": null, 00:31:25.337 "uuid": "a5e1fe67-b64b-44af-8c82-b28ac4248bf3", 00:31:25.337 "is_configured": false, 00:31:25.337 "data_offset": 0, 00:31:25.337 "data_size": 63488 00:31:25.337 }, 00:31:25.337 { 00:31:25.337 "name": "BaseBdev3", 00:31:25.337 "uuid": "37538f20-987a-4486-b53c-982ad50e50a0", 00:31:25.337 "is_configured": true, 00:31:25.337 "data_offset": 2048, 00:31:25.337 "data_size": 
63488 00:31:25.337 } 00:31:25.337 ] 00:31:25.337 }' 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:25.337 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:25.596 [2024-11-26 17:27:55.654802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:25.596 17:27:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.596 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:25.596 "name": "Existed_Raid", 00:31:25.597 "uuid": "8606b2a8-348e-4b59-91db-61502275beae", 00:31:25.597 "strip_size_kb": 64, 00:31:25.597 "state": "configuring", 00:31:25.597 "raid_level": "raid5f", 00:31:25.597 "superblock": true, 00:31:25.597 "num_base_bdevs": 3, 00:31:25.597 "num_base_bdevs_discovered": 1, 00:31:25.597 "num_base_bdevs_operational": 3, 00:31:25.597 "base_bdevs_list": [ 00:31:25.597 { 00:31:25.597 "name": "BaseBdev1", 00:31:25.597 "uuid": "b6215dcd-e44a-49f2-8d06-84a9694417db", 
00:31:25.597 "is_configured": true, 00:31:25.597 "data_offset": 2048, 00:31:25.597 "data_size": 63488 00:31:25.597 }, 00:31:25.597 { 00:31:25.597 "name": null, 00:31:25.597 "uuid": "a5e1fe67-b64b-44af-8c82-b28ac4248bf3", 00:31:25.597 "is_configured": false, 00:31:25.597 "data_offset": 0, 00:31:25.597 "data_size": 63488 00:31:25.597 }, 00:31:25.597 { 00:31:25.597 "name": null, 00:31:25.597 "uuid": "37538f20-987a-4486-b53c-982ad50e50a0", 00:31:25.597 "is_configured": false, 00:31:25.597 "data_offset": 0, 00:31:25.597 "data_size": 63488 00:31:25.597 } 00:31:25.597 ] 00:31:25.597 }' 00:31:25.855 17:27:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:25.855 17:27:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:26.114 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:26.114 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:26.114 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.114 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:26.114 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.114 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:31:26.114 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:26.115 [2024-11-26 17:27:56.106278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:26.115 "name": "Existed_Raid", 00:31:26.115 "uuid": "8606b2a8-348e-4b59-91db-61502275beae", 00:31:26.115 "strip_size_kb": 64, 00:31:26.115 "state": "configuring", 00:31:26.115 "raid_level": "raid5f", 00:31:26.115 "superblock": true, 00:31:26.115 "num_base_bdevs": 3, 00:31:26.115 "num_base_bdevs_discovered": 2, 00:31:26.115 "num_base_bdevs_operational": 3, 00:31:26.115 "base_bdevs_list": [ 00:31:26.115 { 00:31:26.115 "name": "BaseBdev1", 00:31:26.115 "uuid": "b6215dcd-e44a-49f2-8d06-84a9694417db", 00:31:26.115 "is_configured": true, 00:31:26.115 "data_offset": 2048, 00:31:26.115 "data_size": 63488 00:31:26.115 }, 00:31:26.115 { 00:31:26.115 "name": null, 00:31:26.115 "uuid": "a5e1fe67-b64b-44af-8c82-b28ac4248bf3", 00:31:26.115 "is_configured": false, 00:31:26.115 "data_offset": 0, 00:31:26.115 "data_size": 63488 00:31:26.115 }, 00:31:26.115 { 00:31:26.115 "name": "BaseBdev3", 00:31:26.115 "uuid": "37538f20-987a-4486-b53c-982ad50e50a0", 00:31:26.115 "is_configured": true, 00:31:26.115 "data_offset": 2048, 00:31:26.115 "data_size": 63488 00:31:26.115 } 00:31:26.115 ] 00:31:26.115 }' 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:26.115 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.682 17:27:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:26.682 [2024-11-26 17:27:56.589822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:26.682 "name": "Existed_Raid", 00:31:26.682 "uuid": "8606b2a8-348e-4b59-91db-61502275beae", 00:31:26.682 "strip_size_kb": 64, 00:31:26.682 "state": "configuring", 00:31:26.682 "raid_level": "raid5f", 00:31:26.682 "superblock": true, 00:31:26.682 "num_base_bdevs": 3, 00:31:26.682 "num_base_bdevs_discovered": 1, 00:31:26.682 "num_base_bdevs_operational": 3, 00:31:26.682 "base_bdevs_list": [ 00:31:26.682 { 00:31:26.682 "name": null, 00:31:26.682 "uuid": "b6215dcd-e44a-49f2-8d06-84a9694417db", 00:31:26.682 "is_configured": false, 00:31:26.682 "data_offset": 0, 00:31:26.682 "data_size": 63488 00:31:26.682 }, 00:31:26.682 { 00:31:26.682 "name": null, 00:31:26.682 "uuid": "a5e1fe67-b64b-44af-8c82-b28ac4248bf3", 00:31:26.682 "is_configured": false, 00:31:26.682 "data_offset": 0, 00:31:26.682 "data_size": 63488 00:31:26.682 }, 00:31:26.682 { 00:31:26.682 "name": "BaseBdev3", 00:31:26.682 "uuid": "37538f20-987a-4486-b53c-982ad50e50a0", 00:31:26.682 "is_configured": true, 00:31:26.682 "data_offset": 2048, 00:31:26.682 "data_size": 63488 00:31:26.682 } 00:31:26.682 ] 00:31:26.682 }' 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:26.682 17:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.248 [2024-11-26 17:27:57.148918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:27.248 17:27:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:27.248 "name": "Existed_Raid", 00:31:27.248 "uuid": "8606b2a8-348e-4b59-91db-61502275beae", 00:31:27.248 "strip_size_kb": 64, 00:31:27.248 "state": "configuring", 00:31:27.248 "raid_level": "raid5f", 00:31:27.248 "superblock": true, 00:31:27.248 "num_base_bdevs": 3, 00:31:27.248 "num_base_bdevs_discovered": 2, 00:31:27.248 "num_base_bdevs_operational": 3, 00:31:27.248 "base_bdevs_list": [ 00:31:27.248 { 00:31:27.248 "name": null, 00:31:27.248 "uuid": "b6215dcd-e44a-49f2-8d06-84a9694417db", 00:31:27.248 "is_configured": false, 00:31:27.248 "data_offset": 0, 00:31:27.248 "data_size": 63488 00:31:27.248 }, 00:31:27.248 { 00:31:27.248 "name": "BaseBdev2", 00:31:27.248 "uuid": "a5e1fe67-b64b-44af-8c82-b28ac4248bf3", 00:31:27.248 "is_configured": true, 00:31:27.248 "data_offset": 2048, 00:31:27.248 "data_size": 63488 00:31:27.248 }, 00:31:27.248 { 
00:31:27.248 "name": "BaseBdev3", 00:31:27.248 "uuid": "37538f20-987a-4486-b53c-982ad50e50a0", 00:31:27.248 "is_configured": true, 00:31:27.248 "data_offset": 2048, 00:31:27.248 "data_size": 63488 00:31:27.248 } 00:31:27.248 ] 00:31:27.248 }' 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:27.248 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.507 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.507 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.507 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:27.507 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.507 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.507 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:31:27.507 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.507 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:27.507 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b6215dcd-e44a-49f2-8d06-84a9694417db 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.766 [2024-11-26 17:27:57.703173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:27.766 NewBaseBdev 00:31:27.766 [2024-11-26 17:27:57.703680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:27.766 [2024-11-26 17:27:57.703710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:27.766 [2024-11-26 17:27:57.703994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.766 [2024-11-26 17:27:57.709452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:27.766 
[2024-11-26 17:27:57.709474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:31:27.766 [2024-11-26 17:27:57.709773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.766 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.767 [ 00:31:27.767 { 00:31:27.767 "name": "NewBaseBdev", 00:31:27.767 "aliases": [ 00:31:27.767 "b6215dcd-e44a-49f2-8d06-84a9694417db" 00:31:27.767 ], 00:31:27.767 "product_name": "Malloc disk", 00:31:27.767 "block_size": 512, 00:31:27.767 "num_blocks": 65536, 00:31:27.767 "uuid": "b6215dcd-e44a-49f2-8d06-84a9694417db", 00:31:27.767 "assigned_rate_limits": { 00:31:27.767 "rw_ios_per_sec": 0, 00:31:27.767 "rw_mbytes_per_sec": 0, 00:31:27.767 "r_mbytes_per_sec": 0, 00:31:27.767 "w_mbytes_per_sec": 0 00:31:27.767 }, 00:31:27.767 "claimed": true, 00:31:27.767 "claim_type": "exclusive_write", 00:31:27.767 "zoned": false, 00:31:27.767 "supported_io_types": { 00:31:27.767 "read": true, 00:31:27.767 "write": true, 00:31:27.767 "unmap": true, 00:31:27.767 "flush": true, 00:31:27.767 "reset": true, 00:31:27.767 "nvme_admin": false, 00:31:27.767 "nvme_io": false, 00:31:27.767 "nvme_io_md": false, 00:31:27.767 "write_zeroes": true, 00:31:27.767 "zcopy": true, 00:31:27.767 "get_zone_info": false, 00:31:27.767 "zone_management": false, 00:31:27.767 "zone_append": false, 00:31:27.767 "compare": false, 00:31:27.767 "compare_and_write": false, 00:31:27.767 "abort": true, 00:31:27.767 "seek_hole": false, 00:31:27.767 "seek_data": false, 
00:31:27.767 "copy": true, 00:31:27.767 "nvme_iov_md": false 00:31:27.767 }, 00:31:27.767 "memory_domains": [ 00:31:27.767 { 00:31:27.767 "dma_device_id": "system", 00:31:27.767 "dma_device_type": 1 00:31:27.767 }, 00:31:27.767 { 00:31:27.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:27.767 "dma_device_type": 2 00:31:27.767 } 00:31:27.767 ], 00:31:27.767 "driver_specific": {} 00:31:27.767 } 00:31:27.767 ] 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.767 17:27:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:27.767 "name": "Existed_Raid", 00:31:27.767 "uuid": "8606b2a8-348e-4b59-91db-61502275beae", 00:31:27.767 "strip_size_kb": 64, 00:31:27.767 "state": "online", 00:31:27.767 "raid_level": "raid5f", 00:31:27.767 "superblock": true, 00:31:27.767 "num_base_bdevs": 3, 00:31:27.767 "num_base_bdevs_discovered": 3, 00:31:27.767 "num_base_bdevs_operational": 3, 00:31:27.767 "base_bdevs_list": [ 00:31:27.767 { 00:31:27.767 "name": "NewBaseBdev", 00:31:27.767 "uuid": "b6215dcd-e44a-49f2-8d06-84a9694417db", 00:31:27.767 "is_configured": true, 00:31:27.767 "data_offset": 2048, 00:31:27.767 "data_size": 63488 00:31:27.767 }, 00:31:27.767 { 00:31:27.767 "name": "BaseBdev2", 00:31:27.767 "uuid": "a5e1fe67-b64b-44af-8c82-b28ac4248bf3", 00:31:27.767 "is_configured": true, 00:31:27.767 "data_offset": 2048, 00:31:27.767 "data_size": 63488 00:31:27.767 }, 00:31:27.767 { 00:31:27.767 "name": "BaseBdev3", 00:31:27.767 "uuid": "37538f20-987a-4486-b53c-982ad50e50a0", 00:31:27.767 "is_configured": true, 00:31:27.767 "data_offset": 2048, 00:31:27.767 "data_size": 63488 00:31:27.767 } 00:31:27.767 ] 00:31:27.767 }' 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:27.767 17:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:28.335 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:31:28.335 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:28.335 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:28.335 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:28.335 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:31:28.335 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:28.335 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:28.335 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:28.335 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.335 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:28.335 [2024-11-26 17:27:58.203830] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:28.335 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.335 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:28.335 "name": "Existed_Raid", 00:31:28.335 "aliases": [ 00:31:28.335 "8606b2a8-348e-4b59-91db-61502275beae" 00:31:28.335 ], 00:31:28.335 "product_name": "Raid Volume", 00:31:28.335 "block_size": 512, 00:31:28.335 "num_blocks": 126976, 00:31:28.335 "uuid": "8606b2a8-348e-4b59-91db-61502275beae", 00:31:28.335 "assigned_rate_limits": { 00:31:28.335 "rw_ios_per_sec": 0, 00:31:28.335 "rw_mbytes_per_sec": 0, 00:31:28.335 "r_mbytes_per_sec": 0, 00:31:28.335 "w_mbytes_per_sec": 0 00:31:28.335 }, 00:31:28.335 "claimed": false, 00:31:28.335 "zoned": false, 00:31:28.335 
"supported_io_types": { 00:31:28.335 "read": true, 00:31:28.335 "write": true, 00:31:28.335 "unmap": false, 00:31:28.335 "flush": false, 00:31:28.335 "reset": true, 00:31:28.335 "nvme_admin": false, 00:31:28.335 "nvme_io": false, 00:31:28.335 "nvme_io_md": false, 00:31:28.335 "write_zeroes": true, 00:31:28.335 "zcopy": false, 00:31:28.335 "get_zone_info": false, 00:31:28.335 "zone_management": false, 00:31:28.336 "zone_append": false, 00:31:28.336 "compare": false, 00:31:28.336 "compare_and_write": false, 00:31:28.336 "abort": false, 00:31:28.336 "seek_hole": false, 00:31:28.336 "seek_data": false, 00:31:28.336 "copy": false, 00:31:28.336 "nvme_iov_md": false 00:31:28.336 }, 00:31:28.336 "driver_specific": { 00:31:28.336 "raid": { 00:31:28.336 "uuid": "8606b2a8-348e-4b59-91db-61502275beae", 00:31:28.336 "strip_size_kb": 64, 00:31:28.336 "state": "online", 00:31:28.336 "raid_level": "raid5f", 00:31:28.336 "superblock": true, 00:31:28.336 "num_base_bdevs": 3, 00:31:28.336 "num_base_bdevs_discovered": 3, 00:31:28.336 "num_base_bdevs_operational": 3, 00:31:28.336 "base_bdevs_list": [ 00:31:28.336 { 00:31:28.336 "name": "NewBaseBdev", 00:31:28.336 "uuid": "b6215dcd-e44a-49f2-8d06-84a9694417db", 00:31:28.336 "is_configured": true, 00:31:28.336 "data_offset": 2048, 00:31:28.336 "data_size": 63488 00:31:28.336 }, 00:31:28.336 { 00:31:28.336 "name": "BaseBdev2", 00:31:28.336 "uuid": "a5e1fe67-b64b-44af-8c82-b28ac4248bf3", 00:31:28.336 "is_configured": true, 00:31:28.336 "data_offset": 2048, 00:31:28.336 "data_size": 63488 00:31:28.336 }, 00:31:28.336 { 00:31:28.336 "name": "BaseBdev3", 00:31:28.336 "uuid": "37538f20-987a-4486-b53c-982ad50e50a0", 00:31:28.336 "is_configured": true, 00:31:28.336 "data_offset": 2048, 00:31:28.336 "data_size": 63488 00:31:28.336 } 00:31:28.336 ] 00:31:28.336 } 00:31:28.336 } 00:31:28.336 }' 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:31:28.336 BaseBdev2 00:31:28.336 BaseBdev3' 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:28.336 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:28.595 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.595 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:28.595 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:28.595 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:28.595 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.595 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:28.596 [2024-11-26 17:27:58.491205] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:28.596 [2024-11-26 17:27:58.491244] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:31:28.596 [2024-11-26 17:27:58.491350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:28.596 [2024-11-26 17:27:58.491671] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:28.596 [2024-11-26 17:27:58.491692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:31:28.596 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.596 17:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80669 00:31:28.596 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80669 ']' 00:31:28.596 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80669 00:31:28.596 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:31:28.596 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:28.596 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80669 00:31:28.596 killing process with pid 80669 00:31:28.596 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:28.596 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:28.596 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80669' 00:31:28.596 17:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80669 00:31:28.596 [2024-11-26 17:27:58.548311] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:28.596 17:27:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 80669 00:31:28.854 [2024-11-26 17:27:58.861325] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:30.253 ************************************ 00:31:30.253 END TEST raid5f_state_function_test_sb 00:31:30.253 ************************************ 00:31:30.253 17:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:31:30.253 00:31:30.253 real 0m10.900s 00:31:30.253 user 0m17.147s 00:31:30.253 sys 0m2.400s 00:31:30.253 17:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:30.253 17:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:30.253 17:28:00 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:31:30.253 17:28:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:30.253 17:28:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:30.253 17:28:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:30.253 ************************************ 00:31:30.253 START TEST raid5f_superblock_test 00:31:30.253 ************************************ 00:31:30.253 17:28:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:31:30.253 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:31:30.253 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:31:30.253 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:31:30.253 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:31:30.253 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:31:30.253 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:31:30.253 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81290 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81290 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81290 ']' 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:30.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:30.254 17:28:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.254 [2024-11-26 17:28:00.251824] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:31:30.254 [2024-11-26 17:28:00.252194] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81290 ] 00:31:30.522 [2024-11-26 17:28:00.437465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.522 [2024-11-26 17:28:00.586416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.781 [2024-11-26 17:28:00.814789] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:30.781 [2024-11-26 17:28:00.814865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:31.041 17:28:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.041 malloc1 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.041 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.300 [2024-11-26 17:28:01.156703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:31.300 [2024-11-26 17:28:01.156792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:31.300 [2024-11-26 17:28:01.156823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:31.300 [2024-11-26 17:28:01.156836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:31.300 [2024-11-26 17:28:01.159544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:31.300 [2024-11-26 17:28:01.159584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:31.300 pt1 00:31:31.300 17:28:01 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.300 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:31.300 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:31.300 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:31:31.300 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:31:31.300 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:31.300 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:31.300 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:31.300 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:31.300 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:31:31.300 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.300 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.300 malloc2 00:31:31.300 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.301 [2024-11-26 17:28:01.216277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:31.301 [2024-11-26 17:28:01.216492] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:31.301 [2024-11-26 17:28:01.216580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:31.301 [2024-11-26 17:28:01.216673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:31.301 [2024-11-26 17:28:01.219512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:31.301 [2024-11-26 17:28:01.219679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:31.301 pt2 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.301 malloc3 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.301 [2024-11-26 17:28:01.293250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:31.301 [2024-11-26 17:28:01.293439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:31.301 [2024-11-26 17:28:01.293500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:31.301 [2024-11-26 17:28:01.293624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:31.301 [2024-11-26 17:28:01.296195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:31.301 [2024-11-26 17:28:01.296346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:31.301 pt3 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.301 [2024-11-26 17:28:01.305285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:31.301 [2024-11-26 
17:28:01.307614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:31.301 [2024-11-26 17:28:01.307788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:31.301 [2024-11-26 17:28:01.308006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:31.301 [2024-11-26 17:28:01.308204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:31.301 [2024-11-26 17:28:01.308477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:31.301 [2024-11-26 17:28:01.314034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:31.301 [2024-11-26 17:28:01.314157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:31.301 [2024-11-26 17:28:01.314438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:31.301 "name": "raid_bdev1", 00:31:31.301 "uuid": "cf3a6def-e973-4b0b-b955-e508363b3599", 00:31:31.301 "strip_size_kb": 64, 00:31:31.301 "state": "online", 00:31:31.301 "raid_level": "raid5f", 00:31:31.301 "superblock": true, 00:31:31.301 "num_base_bdevs": 3, 00:31:31.301 "num_base_bdevs_discovered": 3, 00:31:31.301 "num_base_bdevs_operational": 3, 00:31:31.301 "base_bdevs_list": [ 00:31:31.301 { 00:31:31.301 "name": "pt1", 00:31:31.301 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:31.301 "is_configured": true, 00:31:31.301 "data_offset": 2048, 00:31:31.301 "data_size": 63488 00:31:31.301 }, 00:31:31.301 { 00:31:31.301 "name": "pt2", 00:31:31.301 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:31.301 "is_configured": true, 00:31:31.301 "data_offset": 2048, 00:31:31.301 "data_size": 63488 00:31:31.301 }, 00:31:31.301 { 00:31:31.301 "name": "pt3", 00:31:31.301 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:31.301 "is_configured": true, 00:31:31.301 "data_offset": 2048, 00:31:31.301 "data_size": 63488 00:31:31.301 } 00:31:31.301 ] 00:31:31.301 }' 00:31:31.301 17:28:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:31.301 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.869 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:31:31.869 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:31.869 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:31.869 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:31.869 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:31.869 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:31.869 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:31.869 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.869 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:31.869 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.869 [2024-11-26 17:28:01.740929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:31.869 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.869 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:31.869 "name": "raid_bdev1", 00:31:31.869 "aliases": [ 00:31:31.869 "cf3a6def-e973-4b0b-b955-e508363b3599" 00:31:31.869 ], 00:31:31.869 "product_name": "Raid Volume", 00:31:31.869 "block_size": 512, 00:31:31.869 "num_blocks": 126976, 00:31:31.869 "uuid": "cf3a6def-e973-4b0b-b955-e508363b3599", 00:31:31.869 "assigned_rate_limits": { 00:31:31.869 "rw_ios_per_sec": 0, 00:31:31.869 
"rw_mbytes_per_sec": 0, 00:31:31.869 "r_mbytes_per_sec": 0, 00:31:31.869 "w_mbytes_per_sec": 0 00:31:31.869 }, 00:31:31.869 "claimed": false, 00:31:31.869 "zoned": false, 00:31:31.869 "supported_io_types": { 00:31:31.869 "read": true, 00:31:31.869 "write": true, 00:31:31.869 "unmap": false, 00:31:31.869 "flush": false, 00:31:31.869 "reset": true, 00:31:31.869 "nvme_admin": false, 00:31:31.869 "nvme_io": false, 00:31:31.869 "nvme_io_md": false, 00:31:31.869 "write_zeroes": true, 00:31:31.869 "zcopy": false, 00:31:31.869 "get_zone_info": false, 00:31:31.869 "zone_management": false, 00:31:31.869 "zone_append": false, 00:31:31.869 "compare": false, 00:31:31.869 "compare_and_write": false, 00:31:31.869 "abort": false, 00:31:31.869 "seek_hole": false, 00:31:31.869 "seek_data": false, 00:31:31.869 "copy": false, 00:31:31.869 "nvme_iov_md": false 00:31:31.869 }, 00:31:31.869 "driver_specific": { 00:31:31.869 "raid": { 00:31:31.869 "uuid": "cf3a6def-e973-4b0b-b955-e508363b3599", 00:31:31.869 "strip_size_kb": 64, 00:31:31.869 "state": "online", 00:31:31.869 "raid_level": "raid5f", 00:31:31.869 "superblock": true, 00:31:31.869 "num_base_bdevs": 3, 00:31:31.869 "num_base_bdevs_discovered": 3, 00:31:31.869 "num_base_bdevs_operational": 3, 00:31:31.869 "base_bdevs_list": [ 00:31:31.869 { 00:31:31.869 "name": "pt1", 00:31:31.869 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:31.869 "is_configured": true, 00:31:31.869 "data_offset": 2048, 00:31:31.869 "data_size": 63488 00:31:31.869 }, 00:31:31.869 { 00:31:31.869 "name": "pt2", 00:31:31.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:31.869 "is_configured": true, 00:31:31.869 "data_offset": 2048, 00:31:31.869 "data_size": 63488 00:31:31.869 }, 00:31:31.869 { 00:31:31.869 "name": "pt3", 00:31:31.869 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:31.869 "is_configured": true, 00:31:31.869 "data_offset": 2048, 00:31:31.869 "data_size": 63488 00:31:31.869 } 00:31:31.869 ] 00:31:31.869 } 00:31:31.869 } 
00:31:31.869 }' 00:31:31.869 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:31.870 pt2 00:31:31.870 pt3' 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.870 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.129 17:28:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:31:32.129 [2024-11-26 17:28:02.016880] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cf3a6def-e973-4b0b-b955-e508363b3599 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cf3a6def-e973-4b0b-b955-e508363b3599 ']' 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.129 [2024-11-26 17:28:02.064621] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:32.129 [2024-11-26 17:28:02.064774] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:32.129 [2024-11-26 17:28:02.064955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:32.129 [2024-11-26 17:28:02.065146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:32.129 [2024-11-26 17:28:02.065306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:31:32.129 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.130 [2024-11-26 17:28:02.216484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:32.130 [2024-11-26 17:28:02.219140] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:32.130 [2024-11-26 17:28:02.219348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:31:32.130 [2024-11-26 17:28:02.219580] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:32.130 [2024-11-26 17:28:02.219781] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:32.130 [2024-11-26 17:28:02.219928] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:31:32.130 [2024-11-26 17:28:02.220079] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:32.130 [2024-11-26 17:28:02.220122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:31:32.130 request: 00:31:32.130 { 00:31:32.130 "name": "raid_bdev1", 00:31:32.130 "raid_level": "raid5f", 00:31:32.130 "base_bdevs": [ 00:31:32.130 "malloc1", 00:31:32.130 "malloc2", 00:31:32.130 "malloc3" 00:31:32.130 ], 00:31:32.130 "strip_size_kb": 64, 00:31:32.130 "superblock": false, 00:31:32.130 "method": "bdev_raid_create", 00:31:32.130 "req_id": 1 00:31:32.130 } 00:31:32.130 Got JSON-RPC error response 00:31:32.130 response: 00:31:32.130 { 00:31:32.130 "code": -17, 00:31:32.130 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:32.130 } 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:32.130 17:28:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.130 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.389 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.389 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.390 [2024-11-26 17:28:02.288405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:32.390 [2024-11-26 17:28:02.288618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:32.390 [2024-11-26 17:28:02.288681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:31:32.390 [2024-11-26 17:28:02.288754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:32.390 [2024-11-26 17:28:02.291508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:32.390 [2024-11-26 17:28:02.291650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:32.390 [2024-11-26 17:28:02.291827] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt1 00:31:32.390 [2024-11-26 17:28:02.291922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:32.390 pt1 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.390 
17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:32.390 "name": "raid_bdev1", 00:31:32.390 "uuid": "cf3a6def-e973-4b0b-b955-e508363b3599", 00:31:32.390 "strip_size_kb": 64, 00:31:32.390 "state": "configuring", 00:31:32.390 "raid_level": "raid5f", 00:31:32.390 "superblock": true, 00:31:32.390 "num_base_bdevs": 3, 00:31:32.390 "num_base_bdevs_discovered": 1, 00:31:32.390 "num_base_bdevs_operational": 3, 00:31:32.390 "base_bdevs_list": [ 00:31:32.390 { 00:31:32.390 "name": "pt1", 00:31:32.390 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:32.390 "is_configured": true, 00:31:32.390 "data_offset": 2048, 00:31:32.390 "data_size": 63488 00:31:32.390 }, 00:31:32.390 { 00:31:32.390 "name": null, 00:31:32.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:32.390 "is_configured": false, 00:31:32.390 "data_offset": 2048, 00:31:32.390 "data_size": 63488 00:31:32.390 }, 00:31:32.390 { 00:31:32.390 "name": null, 00:31:32.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:32.390 "is_configured": false, 00:31:32.390 "data_offset": 2048, 00:31:32.390 "data_size": 63488 00:31:32.390 } 00:31:32.390 ] 00:31:32.390 }' 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:32.390 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.649 [2024-11-26 17:28:02.735814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:32.649 
[2024-11-26 17:28:02.735918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:32.649 [2024-11-26 17:28:02.735961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:31:32.649 [2024-11-26 17:28:02.735979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:32.649 [2024-11-26 17:28:02.736666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:32.649 [2024-11-26 17:28:02.736711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:32.649 [2024-11-26 17:28:02.736854] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:32.649 [2024-11-26 17:28:02.736900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:32.649 pt2 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.649 [2024-11-26 17:28:02.747831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:32.649 
17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:32.649 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:32.908 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:32.908 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:32.908 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.908 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.908 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.908 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:32.908 "name": "raid_bdev1", 00:31:32.908 "uuid": "cf3a6def-e973-4b0b-b955-e508363b3599", 00:31:32.908 "strip_size_kb": 64, 00:31:32.908 "state": "configuring", 00:31:32.908 "raid_level": "raid5f", 00:31:32.908 "superblock": true, 00:31:32.908 "num_base_bdevs": 3, 00:31:32.908 "num_base_bdevs_discovered": 1, 00:31:32.908 "num_base_bdevs_operational": 3, 00:31:32.908 "base_bdevs_list": [ 00:31:32.908 { 00:31:32.908 "name": "pt1", 00:31:32.908 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:32.908 "is_configured": true, 00:31:32.908 "data_offset": 2048, 00:31:32.908 "data_size": 63488 00:31:32.908 }, 00:31:32.908 { 00:31:32.908 "name": null, 00:31:32.908 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:31:32.908 "is_configured": false, 00:31:32.908 "data_offset": 0, 00:31:32.908 "data_size": 63488 00:31:32.908 }, 00:31:32.908 { 00:31:32.908 "name": null, 00:31:32.908 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:32.908 "is_configured": false, 00:31:32.908 "data_offset": 2048, 00:31:32.908 "data_size": 63488 00:31:32.908 } 00:31:32.908 ] 00:31:32.908 }' 00:31:32.908 17:28:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:32.908 17:28:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:33.209 [2024-11-26 17:28:03.235050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:33.209 [2024-11-26 17:28:03.235272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:33.209 [2024-11-26 17:28:03.235333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:31:33.209 [2024-11-26 17:28:03.235424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:33.209 [2024-11-26 17:28:03.235981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:33.209 [2024-11-26 17:28:03.236007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:33.209 [2024-11-26 17:28:03.236102] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt2 00:31:33.209 [2024-11-26 17:28:03.236129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:33.209 pt2 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:33.209 [2024-11-26 17:28:03.246995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:33.209 [2024-11-26 17:28:03.247052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:33.209 [2024-11-26 17:28:03.247071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:33.209 [2024-11-26 17:28:03.247085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:33.209 [2024-11-26 17:28:03.247498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:33.209 [2024-11-26 17:28:03.247540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:33.209 [2024-11-26 17:28:03.247618] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:33.209 [2024-11-26 17:28:03.247644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:33.209 [2024-11-26 17:28:03.247793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:33.209 [2024-11-26 
17:28:03.247808] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:33.209 [2024-11-26 17:28:03.248073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:33.209 pt3 00:31:33.209 [2024-11-26 17:28:03.253735] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:33.209 [2024-11-26 17:28:03.253756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:31:33.209 [2024-11-26 17:28:03.253956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:33.209 
17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:33.209 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:33.210 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:33.210 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.210 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:33.210 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.210 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:33.210 "name": "raid_bdev1", 00:31:33.210 "uuid": "cf3a6def-e973-4b0b-b955-e508363b3599", 00:31:33.210 "strip_size_kb": 64, 00:31:33.210 "state": "online", 00:31:33.210 "raid_level": "raid5f", 00:31:33.210 "superblock": true, 00:31:33.210 "num_base_bdevs": 3, 00:31:33.210 "num_base_bdevs_discovered": 3, 00:31:33.210 "num_base_bdevs_operational": 3, 00:31:33.210 "base_bdevs_list": [ 00:31:33.210 { 00:31:33.210 "name": "pt1", 00:31:33.210 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:33.210 "is_configured": true, 00:31:33.210 "data_offset": 2048, 00:31:33.210 "data_size": 63488 00:31:33.210 }, 00:31:33.210 { 00:31:33.210 "name": "pt2", 00:31:33.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:33.210 "is_configured": true, 00:31:33.210 "data_offset": 2048, 00:31:33.210 "data_size": 63488 00:31:33.210 }, 00:31:33.210 { 00:31:33.210 "name": "pt3", 00:31:33.210 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:33.210 "is_configured": true, 00:31:33.210 "data_offset": 2048, 00:31:33.210 "data_size": 63488 00:31:33.210 } 00:31:33.210 ] 00:31:33.210 }' 00:31:33.210 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:33.210 17:28:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:33.799 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:31:33.799 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:33.799 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:33.799 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:33.799 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:33.799 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:33.799 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:33.799 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:33.799 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.799 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:33.799 [2024-11-26 17:28:03.708189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:33.799 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.799 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:33.799 "name": "raid_bdev1", 00:31:33.799 "aliases": [ 00:31:33.799 "cf3a6def-e973-4b0b-b955-e508363b3599" 00:31:33.799 ], 00:31:33.799 "product_name": "Raid Volume", 00:31:33.799 "block_size": 512, 00:31:33.799 "num_blocks": 126976, 00:31:33.799 "uuid": "cf3a6def-e973-4b0b-b955-e508363b3599", 00:31:33.799 "assigned_rate_limits": { 00:31:33.799 "rw_ios_per_sec": 0, 00:31:33.799 "rw_mbytes_per_sec": 0, 00:31:33.799 "r_mbytes_per_sec": 0, 00:31:33.799 "w_mbytes_per_sec": 0 00:31:33.799 }, 00:31:33.799 "claimed": false, 
00:31:33.799 "zoned": false, 00:31:33.799 "supported_io_types": { 00:31:33.799 "read": true, 00:31:33.799 "write": true, 00:31:33.799 "unmap": false, 00:31:33.799 "flush": false, 00:31:33.799 "reset": true, 00:31:33.799 "nvme_admin": false, 00:31:33.799 "nvme_io": false, 00:31:33.799 "nvme_io_md": false, 00:31:33.799 "write_zeroes": true, 00:31:33.799 "zcopy": false, 00:31:33.799 "get_zone_info": false, 00:31:33.799 "zone_management": false, 00:31:33.799 "zone_append": false, 00:31:33.799 "compare": false, 00:31:33.799 "compare_and_write": false, 00:31:33.799 "abort": false, 00:31:33.799 "seek_hole": false, 00:31:33.799 "seek_data": false, 00:31:33.799 "copy": false, 00:31:33.800 "nvme_iov_md": false 00:31:33.800 }, 00:31:33.800 "driver_specific": { 00:31:33.800 "raid": { 00:31:33.800 "uuid": "cf3a6def-e973-4b0b-b955-e508363b3599", 00:31:33.800 "strip_size_kb": 64, 00:31:33.800 "state": "online", 00:31:33.800 "raid_level": "raid5f", 00:31:33.800 "superblock": true, 00:31:33.800 "num_base_bdevs": 3, 00:31:33.800 "num_base_bdevs_discovered": 3, 00:31:33.800 "num_base_bdevs_operational": 3, 00:31:33.800 "base_bdevs_list": [ 00:31:33.800 { 00:31:33.800 "name": "pt1", 00:31:33.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:33.800 "is_configured": true, 00:31:33.800 "data_offset": 2048, 00:31:33.800 "data_size": 63488 00:31:33.800 }, 00:31:33.800 { 00:31:33.800 "name": "pt2", 00:31:33.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:33.800 "is_configured": true, 00:31:33.800 "data_offset": 2048, 00:31:33.800 "data_size": 63488 00:31:33.800 }, 00:31:33.800 { 00:31:33.800 "name": "pt3", 00:31:33.800 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:33.800 "is_configured": true, 00:31:33.800 "data_offset": 2048, 00:31:33.800 "data_size": 63488 00:31:33.800 } 00:31:33.800 ] 00:31:33.800 } 00:31:33.800 } 00:31:33.800 }' 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:33.800 pt2 00:31:33.800 pt3' 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:33.800 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:33.800 17:28:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.059 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:34.059 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:34.059 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:34.059 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:31:34.059 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.059 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.059 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:34.059 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.059 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:34.059 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:34.059 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:31:34.059 17:28:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:34.059 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.059 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.059 [2024-11-26 17:28:03.967770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:34.059 17:28:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
cf3a6def-e973-4b0b-b955-e508363b3599 '!=' cf3a6def-e973-4b0b-b955-e508363b3599 ']' 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.059 [2024-11-26 17:28:04.015569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:34.059 "name": "raid_bdev1", 00:31:34.059 "uuid": "cf3a6def-e973-4b0b-b955-e508363b3599", 00:31:34.059 "strip_size_kb": 64, 00:31:34.059 "state": "online", 00:31:34.059 "raid_level": "raid5f", 00:31:34.059 "superblock": true, 00:31:34.059 "num_base_bdevs": 3, 00:31:34.059 "num_base_bdevs_discovered": 2, 00:31:34.059 "num_base_bdevs_operational": 2, 00:31:34.059 "base_bdevs_list": [ 00:31:34.059 { 00:31:34.059 "name": null, 00:31:34.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:34.059 "is_configured": false, 00:31:34.059 "data_offset": 0, 00:31:34.059 "data_size": 63488 00:31:34.059 }, 00:31:34.059 { 00:31:34.059 "name": "pt2", 00:31:34.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:34.059 "is_configured": true, 00:31:34.059 "data_offset": 2048, 00:31:34.059 "data_size": 63488 00:31:34.059 }, 00:31:34.059 { 00:31:34.059 "name": "pt3", 00:31:34.059 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:34.059 "is_configured": true, 00:31:34.059 "data_offset": 2048, 00:31:34.059 "data_size": 63488 00:31:34.059 } 00:31:34.059 ] 00:31:34.059 }' 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:34.059 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.627 
17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.627 [2024-11-26 17:28:04.450877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:34.627 [2024-11-26 17:28:04.451048] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:34.627 [2024-11-26 17:28:04.451174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:34.627 [2024-11-26 17:28:04.451241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:34.627 [2024-11-26 17:28:04.451262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.627 [2024-11-26 17:28:04.530713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:31:34.627 [2024-11-26 17:28:04.530901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:34.627 [2024-11-26 17:28:04.530958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:31:34.627 [2024-11-26 17:28:04.531085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:34.627 [2024-11-26 17:28:04.533835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:34.627 [2024-11-26 17:28:04.533980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:34.627 [2024-11-26 17:28:04.534094] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:34.627 [2024-11-26 17:28:04.534152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:34.627 pt2 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:34.627 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.628 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:34.628 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.628 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.628 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:34.628 "name": "raid_bdev1", 00:31:34.628 "uuid": "cf3a6def-e973-4b0b-b955-e508363b3599", 00:31:34.628 "strip_size_kb": 64, 00:31:34.628 "state": "configuring", 00:31:34.628 "raid_level": "raid5f", 00:31:34.628 "superblock": true, 00:31:34.628 "num_base_bdevs": 3, 00:31:34.628 "num_base_bdevs_discovered": 1, 00:31:34.628 "num_base_bdevs_operational": 2, 00:31:34.628 "base_bdevs_list": [ 00:31:34.628 { 00:31:34.628 "name": null, 00:31:34.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:34.628 "is_configured": false, 00:31:34.628 "data_offset": 2048, 00:31:34.628 "data_size": 63488 00:31:34.628 }, 00:31:34.628 { 00:31:34.628 "name": "pt2", 00:31:34.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:34.628 "is_configured": true, 00:31:34.628 "data_offset": 2048, 00:31:34.628 "data_size": 63488 00:31:34.628 }, 00:31:34.628 { 00:31:34.628 "name": null, 00:31:34.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:34.628 "is_configured": false, 00:31:34.628 "data_offset": 2048, 00:31:34.628 "data_size": 63488 00:31:34.628 } 00:31:34.628 ] 00:31:34.628 }' 00:31:34.628 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:34.628 17:28:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.887 [2024-11-26 17:28:04.970187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:34.887 [2024-11-26 17:28:04.970404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:34.887 [2024-11-26 17:28:04.970513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:31:34.887 [2024-11-26 17:28:04.970628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:34.887 [2024-11-26 17:28:04.971232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:34.887 [2024-11-26 17:28:04.971365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:34.887 [2024-11-26 17:28:04.971558] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:34.887 [2024-11-26 17:28:04.971679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:34.887 [2024-11-26 17:28:04.971848] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:34.887 [2024-11-26 17:28:04.971958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:34.887 [2024-11-26 
17:28:04.972285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:34.887 [2024-11-26 17:28:04.977597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:34.887 pt3 00:31:34.887 [2024-11-26 17:28:04.977705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:31:34.887 [2024-11-26 17:28:04.978061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.887 17:28:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:35.144 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.144 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:35.144 "name": "raid_bdev1", 00:31:35.144 "uuid": "cf3a6def-e973-4b0b-b955-e508363b3599", 00:31:35.144 "strip_size_kb": 64, 00:31:35.144 "state": "online", 00:31:35.144 "raid_level": "raid5f", 00:31:35.144 "superblock": true, 00:31:35.144 "num_base_bdevs": 3, 00:31:35.144 "num_base_bdevs_discovered": 2, 00:31:35.144 "num_base_bdevs_operational": 2, 00:31:35.144 "base_bdevs_list": [ 00:31:35.144 { 00:31:35.144 "name": null, 00:31:35.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:35.144 "is_configured": false, 00:31:35.144 "data_offset": 2048, 00:31:35.144 "data_size": 63488 00:31:35.144 }, 00:31:35.144 { 00:31:35.144 "name": "pt2", 00:31:35.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:35.144 "is_configured": true, 00:31:35.144 "data_offset": 2048, 00:31:35.144 "data_size": 63488 00:31:35.144 }, 00:31:35.144 { 00:31:35.144 "name": "pt3", 00:31:35.144 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:35.144 "is_configured": true, 00:31:35.144 "data_offset": 2048, 00:31:35.144 "data_size": 63488 00:31:35.144 } 00:31:35.144 ] 00:31:35.144 }' 00:31:35.144 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:35.144 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:35.401 [2024-11-26 17:28:05.352953] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:35.401 [2024-11-26 17:28:05.352999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:35.401 [2024-11-26 17:28:05.353099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:35.401 [2024-11-26 17:28:05.353172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:35.401 [2024-11-26 17:28:05.353186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.401 17:28:05 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.401 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:35.401 [2024-11-26 17:28:05.420845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:35.401 [2024-11-26 17:28:05.421038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:35.401 [2024-11-26 17:28:05.421101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:31:35.401 [2024-11-26 17:28:05.421186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:35.401 [2024-11-26 17:28:05.424047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:35.401 [2024-11-26 17:28:05.424196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:35.401 [2024-11-26 17:28:05.424385] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:35.401 pt1 00:31:35.401 [2024-11-26 17:28:05.424477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:35.401 [2024-11-26 17:28:05.424678] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:35.401 [2024-11-26 17:28:05.424695] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:35.402 [2024-11-26 17:28:05.424713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 
00:31:35.402 [2024-11-26 17:28:05.424779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:35.402 "name": "raid_bdev1", 00:31:35.402 "uuid": "cf3a6def-e973-4b0b-b955-e508363b3599", 00:31:35.402 "strip_size_kb": 64, 00:31:35.402 "state": "configuring", 00:31:35.402 "raid_level": "raid5f", 00:31:35.402 "superblock": true, 00:31:35.402 "num_base_bdevs": 3, 00:31:35.402 "num_base_bdevs_discovered": 1, 00:31:35.402 "num_base_bdevs_operational": 2, 00:31:35.402 "base_bdevs_list": [ 00:31:35.402 { 00:31:35.402 "name": null, 00:31:35.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:35.402 "is_configured": false, 00:31:35.402 "data_offset": 2048, 00:31:35.402 "data_size": 63488 00:31:35.402 }, 00:31:35.402 { 00:31:35.402 "name": "pt2", 00:31:35.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:35.402 "is_configured": true, 00:31:35.402 "data_offset": 2048, 00:31:35.402 "data_size": 63488 00:31:35.402 }, 00:31:35.402 { 00:31:35.402 "name": null, 00:31:35.402 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:35.402 "is_configured": false, 00:31:35.402 "data_offset": 2048, 00:31:35.402 "data_size": 63488 00:31:35.402 } 00:31:35.402 ] 00:31:35.402 }' 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:35.402 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:35.968 [2024-11-26 17:28:05.928268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:35.968 [2024-11-26 17:28:05.928504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:35.968 [2024-11-26 17:28:05.928563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:31:35.968 [2024-11-26 17:28:05.928580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:35.968 [2024-11-26 17:28:05.929199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:35.968 [2024-11-26 17:28:05.929221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:35.968 [2024-11-26 17:28:05.929325] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:35.968 [2024-11-26 17:28:05.929354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:35.968 [2024-11-26 17:28:05.929511] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:31:35.968 [2024-11-26 17:28:05.929522] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:35.968 [2024-11-26 17:28:05.929877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:35.968 pt3 00:31:35.968 [2024-11-26 17:28:05.936249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 
00:31:35.968 [2024-11-26 17:28:05.936280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:31:35.968 [2024-11-26 17:28:05.936598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:35.968 17:28:05 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:35.968 "name": "raid_bdev1", 00:31:35.968 "uuid": "cf3a6def-e973-4b0b-b955-e508363b3599", 00:31:35.968 "strip_size_kb": 64, 00:31:35.968 "state": "online", 00:31:35.968 "raid_level": "raid5f", 00:31:35.968 "superblock": true, 00:31:35.968 "num_base_bdevs": 3, 00:31:35.968 "num_base_bdevs_discovered": 2, 00:31:35.968 "num_base_bdevs_operational": 2, 00:31:35.968 "base_bdevs_list": [ 00:31:35.968 { 00:31:35.968 "name": null, 00:31:35.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:35.968 "is_configured": false, 00:31:35.968 "data_offset": 2048, 00:31:35.968 "data_size": 63488 00:31:35.968 }, 00:31:35.968 { 00:31:35.968 "name": "pt2", 00:31:35.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:35.968 "is_configured": true, 00:31:35.968 "data_offset": 2048, 00:31:35.968 "data_size": 63488 00:31:35.968 }, 00:31:35.968 { 00:31:35.968 "name": "pt3", 00:31:35.968 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:35.968 "is_configured": true, 00:31:35.968 "data_offset": 2048, 00:31:35.968 "data_size": 63488 00:31:35.968 } 00:31:35.968 ] 00:31:35.968 }' 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:35.968 17:28:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:36.226 17:28:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:31:36.226 17:28:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:36.226 17:28:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.226 17:28:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:36.484 [2024-11-26 17:28:06.383787] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' cf3a6def-e973-4b0b-b955-e508363b3599 '!=' cf3a6def-e973-4b0b-b955-e508363b3599 ']' 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81290 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81290 ']' 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81290 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81290 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:36.484 killing process with pid 81290 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 81290' 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81290 00:31:36.484 [2024-11-26 17:28:06.458149] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:36.484 17:28:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81290 00:31:36.484 [2024-11-26 17:28:06.458280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:36.484 [2024-11-26 17:28:06.458359] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:36.484 [2024-11-26 17:28:06.458377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:31:36.743 [2024-11-26 17:28:06.785859] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:38.120 17:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:31:38.120 00:31:38.120 real 0m7.870s 00:31:38.120 user 0m12.130s 00:31:38.120 sys 0m1.741s 00:31:38.120 ************************************ 00:31:38.120 END TEST raid5f_superblock_test 00:31:38.120 ************************************ 00:31:38.120 17:28:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:38.120 17:28:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:38.120 17:28:08 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:31:38.120 17:28:08 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:31:38.120 17:28:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:31:38.120 17:28:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:38.120 17:28:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:38.120 ************************************ 00:31:38.120 START TEST 
raid5f_rebuild_test 00:31:38.120 ************************************ 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:38.120 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:38.121 17:28:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81734 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81734 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81734 ']' 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:31:38.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:38.121 17:28:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:38.121 [2024-11-26 17:28:08.208602] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:31:38.121 [2024-11-26 17:28:08.208939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:31:38.121 Zero copy mechanism will not be used. 00:31:38.121 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81734 ] 00:31:38.380 [2024-11-26 17:28:08.396625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.639 [2024-11-26 17:28:08.549982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.897 [2024-11-26 17:28:08.781668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:38.897 [2024-11-26 17:28:08.781935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:31:39.157 BaseBdev1_malloc 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.157 [2024-11-26 17:28:09.125493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:39.157 [2024-11-26 17:28:09.125599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:39.157 [2024-11-26 17:28:09.125644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:39.157 [2024-11-26 17:28:09.125661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:39.157 [2024-11-26 17:28:09.128419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:39.157 [2024-11-26 17:28:09.128470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:39.157 BaseBdev1 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.157 BaseBdev2_malloc 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.157 [2024-11-26 17:28:09.185967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:39.157 [2024-11-26 17:28:09.186045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:39.157 [2024-11-26 17:28:09.186072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:39.157 [2024-11-26 17:28:09.186088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:39.157 [2024-11-26 17:28:09.188691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:39.157 [2024-11-26 17:28:09.188735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:39.157 BaseBdev2 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.157 BaseBdev3_malloc 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.157 
17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.157 [2024-11-26 17:28:09.254569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:39.157 [2024-11-26 17:28:09.254634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:39.157 [2024-11-26 17:28:09.254661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:39.157 [2024-11-26 17:28:09.254676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:39.157 [2024-11-26 17:28:09.257206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:39.157 [2024-11-26 17:28:09.257251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:39.157 BaseBdev3 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.157 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.417 spare_malloc 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.417 spare_delay 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.417 [2024-11-26 17:28:09.325539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:39.417 [2024-11-26 17:28:09.325612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:39.417 [2024-11-26 17:28:09.325634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:31:39.417 [2024-11-26 17:28:09.325648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:39.417 [2024-11-26 17:28:09.328187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:39.417 [2024-11-26 17:28:09.328233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:39.417 spare 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.417 [2024-11-26 17:28:09.337599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:39.417 [2024-11-26 17:28:09.339796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:39.417 [2024-11-26 17:28:09.339864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:39.417 [2024-11-26 17:28:09.339954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:39.417 
[2024-11-26 17:28:09.339968] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:39.417 [2024-11-26 17:28:09.340254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:39.417 [2024-11-26 17:28:09.346947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:39.417 [2024-11-26 17:28:09.347075] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:39.417 [2024-11-26 17:28:09.347375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:39.417 17:28:09 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:39.417 "name": "raid_bdev1", 00:31:39.417 "uuid": "f2cdb2ae-c936-4ca8-8523-81015ef0115a", 00:31:39.417 "strip_size_kb": 64, 00:31:39.417 "state": "online", 00:31:39.417 "raid_level": "raid5f", 00:31:39.417 "superblock": false, 00:31:39.417 "num_base_bdevs": 3, 00:31:39.417 "num_base_bdevs_discovered": 3, 00:31:39.417 "num_base_bdevs_operational": 3, 00:31:39.417 "base_bdevs_list": [ 00:31:39.417 { 00:31:39.417 "name": "BaseBdev1", 00:31:39.417 "uuid": "0205649a-a353-56cf-b989-818e1eb709d7", 00:31:39.417 "is_configured": true, 00:31:39.417 "data_offset": 0, 00:31:39.417 "data_size": 65536 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "name": "BaseBdev2", 00:31:39.417 "uuid": "dd9745db-908c-5bf0-a88c-490fe3015975", 00:31:39.417 "is_configured": true, 00:31:39.417 "data_offset": 0, 00:31:39.417 "data_size": 65536 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "name": "BaseBdev3", 00:31:39.417 "uuid": "bb73958c-4e7d-56bb-974d-a24216a67729", 00:31:39.417 "is_configured": true, 00:31:39.417 "data_offset": 0, 00:31:39.417 "data_size": 65536 00:31:39.417 } 00:31:39.417 ] 00:31:39.417 }' 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:39.417 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.983 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:39.983 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # 
jq -r '.[].num_blocks' 00:31:39.983 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.983 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.983 [2024-11-26 17:28:09.806537] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:39.983 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.983 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:31:39.983 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:39.983 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.983 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.983 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:39.984 17:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.984 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:31:39.984 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:31:39.984 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:31:39.984 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:31:39.984 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:31:39.984 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:39.984 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:39.984 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:39.984 17:28:09 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:39.984 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:39.984 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:39.984 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:39.984 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:39.984 17:28:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:40.242 [2024-11-26 17:28:10.141882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:40.242 /dev/nbd0 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:31:40.242 1+0 records in 00:31:40.242 1+0 records out 00:31:40.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272674 s, 15.0 MB/s 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:31:40.242 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:31:40.813 512+0 records in 00:31:40.813 512+0 records out 00:31:40.813 67108864 bytes (67 MB, 64 MiB) copied, 0.418886 s, 160 MB/s 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:40.813 17:28:10 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:40.813 [2024-11-26 17:28:10.869302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.813 [2024-11-26 17:28:10.893447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.813 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.073 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.073 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:41.073 "name": "raid_bdev1", 00:31:41.073 "uuid": "f2cdb2ae-c936-4ca8-8523-81015ef0115a", 00:31:41.073 "strip_size_kb": 64, 00:31:41.073 "state": "online", 00:31:41.073 "raid_level": "raid5f", 00:31:41.073 "superblock": false, 00:31:41.073 "num_base_bdevs": 3, 00:31:41.073 "num_base_bdevs_discovered": 2, 00:31:41.073 "num_base_bdevs_operational": 2, 00:31:41.073 "base_bdevs_list": [ 00:31:41.073 { 00:31:41.073 "name": null, 00:31:41.073 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:41.073 "is_configured": false, 00:31:41.073 "data_offset": 0, 00:31:41.073 "data_size": 65536 00:31:41.073 }, 00:31:41.073 { 00:31:41.073 "name": "BaseBdev2", 00:31:41.073 "uuid": "dd9745db-908c-5bf0-a88c-490fe3015975", 00:31:41.073 "is_configured": true, 00:31:41.073 "data_offset": 0, 00:31:41.073 "data_size": 65536 00:31:41.073 }, 00:31:41.073 { 00:31:41.073 "name": "BaseBdev3", 00:31:41.073 "uuid": "bb73958c-4e7d-56bb-974d-a24216a67729", 00:31:41.073 "is_configured": true, 00:31:41.073 "data_offset": 0, 00:31:41.073 "data_size": 65536 00:31:41.073 } 00:31:41.073 ] 00:31:41.073 }' 00:31:41.073 17:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:41.073 17:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.333 17:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:41.333 17:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.333 17:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.333 [2024-11-26 17:28:11.376826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:41.333 [2024-11-26 17:28:11.396843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:31:41.333 17:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.333 17:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:31:41.333 [2024-11-26 17:28:11.406345] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:42.712 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:42.712 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:42.712 
17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:42.712 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:42.712 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:42.712 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.712 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.712 17:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.712 17:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:42.713 "name": "raid_bdev1", 00:31:42.713 "uuid": "f2cdb2ae-c936-4ca8-8523-81015ef0115a", 00:31:42.713 "strip_size_kb": 64, 00:31:42.713 "state": "online", 00:31:42.713 "raid_level": "raid5f", 00:31:42.713 "superblock": false, 00:31:42.713 "num_base_bdevs": 3, 00:31:42.713 "num_base_bdevs_discovered": 3, 00:31:42.713 "num_base_bdevs_operational": 3, 00:31:42.713 "process": { 00:31:42.713 "type": "rebuild", 00:31:42.713 "target": "spare", 00:31:42.713 "progress": { 00:31:42.713 "blocks": 20480, 00:31:42.713 "percent": 15 00:31:42.713 } 00:31:42.713 }, 00:31:42.713 "base_bdevs_list": [ 00:31:42.713 { 00:31:42.713 "name": "spare", 00:31:42.713 "uuid": "b1681eef-b39d-5980-8841-8e595035b315", 00:31:42.713 "is_configured": true, 00:31:42.713 "data_offset": 0, 00:31:42.713 "data_size": 65536 00:31:42.713 }, 00:31:42.713 { 00:31:42.713 "name": "BaseBdev2", 00:31:42.713 "uuid": "dd9745db-908c-5bf0-a88c-490fe3015975", 00:31:42.713 "is_configured": true, 00:31:42.713 "data_offset": 0, 00:31:42.713 "data_size": 65536 00:31:42.713 }, 00:31:42.713 
{ 00:31:42.713 "name": "BaseBdev3", 00:31:42.713 "uuid": "bb73958c-4e7d-56bb-974d-a24216a67729", 00:31:42.713 "is_configured": true, 00:31:42.713 "data_offset": 0, 00:31:42.713 "data_size": 65536 00:31:42.713 } 00:31:42.713 ] 00:31:42.713 }' 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.713 [2024-11-26 17:28:12.523093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:42.713 [2024-11-26 17:28:12.619313] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:42.713 [2024-11-26 17:28:12.619431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:42.713 [2024-11-26 17:28:12.619456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:42.713 [2024-11-26 17:28:12.619468] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:42.713 "name": "raid_bdev1", 00:31:42.713 "uuid": "f2cdb2ae-c936-4ca8-8523-81015ef0115a", 00:31:42.713 "strip_size_kb": 64, 00:31:42.713 "state": "online", 00:31:42.713 "raid_level": "raid5f", 00:31:42.713 "superblock": false, 00:31:42.713 "num_base_bdevs": 3, 00:31:42.713 "num_base_bdevs_discovered": 2, 00:31:42.713 "num_base_bdevs_operational": 2, 00:31:42.713 "base_bdevs_list": [ 00:31:42.713 { 00:31:42.713 "name": null, 00:31:42.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.713 
"is_configured": false, 00:31:42.713 "data_offset": 0, 00:31:42.713 "data_size": 65536 00:31:42.713 }, 00:31:42.713 { 00:31:42.713 "name": "BaseBdev2", 00:31:42.713 "uuid": "dd9745db-908c-5bf0-a88c-490fe3015975", 00:31:42.713 "is_configured": true, 00:31:42.713 "data_offset": 0, 00:31:42.713 "data_size": 65536 00:31:42.713 }, 00:31:42.713 { 00:31:42.713 "name": "BaseBdev3", 00:31:42.713 "uuid": "bb73958c-4e7d-56bb-974d-a24216a67729", 00:31:42.713 "is_configured": true, 00:31:42.713 "data_offset": 0, 00:31:42.713 "data_size": 65536 00:31:42.713 } 00:31:42.713 ] 00:31:42.713 }' 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:42.713 17:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:43.279 "name": 
"raid_bdev1", 00:31:43.279 "uuid": "f2cdb2ae-c936-4ca8-8523-81015ef0115a", 00:31:43.279 "strip_size_kb": 64, 00:31:43.279 "state": "online", 00:31:43.279 "raid_level": "raid5f", 00:31:43.279 "superblock": false, 00:31:43.279 "num_base_bdevs": 3, 00:31:43.279 "num_base_bdevs_discovered": 2, 00:31:43.279 "num_base_bdevs_operational": 2, 00:31:43.279 "base_bdevs_list": [ 00:31:43.279 { 00:31:43.279 "name": null, 00:31:43.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.279 "is_configured": false, 00:31:43.279 "data_offset": 0, 00:31:43.279 "data_size": 65536 00:31:43.279 }, 00:31:43.279 { 00:31:43.279 "name": "BaseBdev2", 00:31:43.279 "uuid": "dd9745db-908c-5bf0-a88c-490fe3015975", 00:31:43.279 "is_configured": true, 00:31:43.279 "data_offset": 0, 00:31:43.279 "data_size": 65536 00:31:43.279 }, 00:31:43.279 { 00:31:43.279 "name": "BaseBdev3", 00:31:43.279 "uuid": "bb73958c-4e7d-56bb-974d-a24216a67729", 00:31:43.279 "is_configured": true, 00:31:43.279 "data_offset": 0, 00:31:43.279 "data_size": 65536 00:31:43.279 } 00:31:43.279 ] 00:31:43.279 }' 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.279 [2024-11-26 17:28:13.300038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:43.279 [2024-11-26 
17:28:13.319384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.279 17:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:31:43.280 [2024-11-26 17:28:13.328854] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:44.652 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:44.652 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:44.653 "name": "raid_bdev1", 00:31:44.653 "uuid": "f2cdb2ae-c936-4ca8-8523-81015ef0115a", 00:31:44.653 "strip_size_kb": 64, 00:31:44.653 "state": "online", 00:31:44.653 "raid_level": "raid5f", 00:31:44.653 "superblock": false, 00:31:44.653 "num_base_bdevs": 3, 00:31:44.653 "num_base_bdevs_discovered": 3, 00:31:44.653 "num_base_bdevs_operational": 3, 
00:31:44.653 "process": { 00:31:44.653 "type": "rebuild", 00:31:44.653 "target": "spare", 00:31:44.653 "progress": { 00:31:44.653 "blocks": 20480, 00:31:44.653 "percent": 15 00:31:44.653 } 00:31:44.653 }, 00:31:44.653 "base_bdevs_list": [ 00:31:44.653 { 00:31:44.653 "name": "spare", 00:31:44.653 "uuid": "b1681eef-b39d-5980-8841-8e595035b315", 00:31:44.653 "is_configured": true, 00:31:44.653 "data_offset": 0, 00:31:44.653 "data_size": 65536 00:31:44.653 }, 00:31:44.653 { 00:31:44.653 "name": "BaseBdev2", 00:31:44.653 "uuid": "dd9745db-908c-5bf0-a88c-490fe3015975", 00:31:44.653 "is_configured": true, 00:31:44.653 "data_offset": 0, 00:31:44.653 "data_size": 65536 00:31:44.653 }, 00:31:44.653 { 00:31:44.653 "name": "BaseBdev3", 00:31:44.653 "uuid": "bb73958c-4e7d-56bb-974d-a24216a67729", 00:31:44.653 "is_configured": true, 00:31:44.653 "data_offset": 0, 00:31:44.653 "data_size": 65536 00:31:44.653 } 00:31:44.653 ] 00:31:44.653 }' 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=560 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:44.653 "name": "raid_bdev1", 00:31:44.653 "uuid": "f2cdb2ae-c936-4ca8-8523-81015ef0115a", 00:31:44.653 "strip_size_kb": 64, 00:31:44.653 "state": "online", 00:31:44.653 "raid_level": "raid5f", 00:31:44.653 "superblock": false, 00:31:44.653 "num_base_bdevs": 3, 00:31:44.653 "num_base_bdevs_discovered": 3, 00:31:44.653 "num_base_bdevs_operational": 3, 00:31:44.653 "process": { 00:31:44.653 "type": "rebuild", 00:31:44.653 "target": "spare", 00:31:44.653 "progress": { 00:31:44.653 "blocks": 22528, 00:31:44.653 "percent": 17 00:31:44.653 } 00:31:44.653 }, 00:31:44.653 "base_bdevs_list": [ 00:31:44.653 { 00:31:44.653 "name": "spare", 00:31:44.653 "uuid": "b1681eef-b39d-5980-8841-8e595035b315", 00:31:44.653 "is_configured": true, 00:31:44.653 "data_offset": 0, 00:31:44.653 "data_size": 65536 00:31:44.653 }, 00:31:44.653 { 00:31:44.653 "name": "BaseBdev2", 
00:31:44.653 "uuid": "dd9745db-908c-5bf0-a88c-490fe3015975", 00:31:44.653 "is_configured": true, 00:31:44.653 "data_offset": 0, 00:31:44.653 "data_size": 65536 00:31:44.653 }, 00:31:44.653 { 00:31:44.653 "name": "BaseBdev3", 00:31:44.653 "uuid": "bb73958c-4e7d-56bb-974d-a24216a67729", 00:31:44.653 "is_configured": true, 00:31:44.653 "data_offset": 0, 00:31:44.653 "data_size": 65536 00:31:44.653 } 00:31:44.653 ] 00:31:44.653 }' 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:44.653 17:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:45.589 17:28:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:45.589 17:28:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:45.589 17:28:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:45.589 17:28:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:45.589 17:28:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:45.589 17:28:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:45.589 17:28:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.589 17:28:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:45.589 17:28:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.589 
17:28:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.589 17:28:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.589 17:28:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:45.589 "name": "raid_bdev1", 00:31:45.589 "uuid": "f2cdb2ae-c936-4ca8-8523-81015ef0115a", 00:31:45.589 "strip_size_kb": 64, 00:31:45.589 "state": "online", 00:31:45.589 "raid_level": "raid5f", 00:31:45.589 "superblock": false, 00:31:45.589 "num_base_bdevs": 3, 00:31:45.589 "num_base_bdevs_discovered": 3, 00:31:45.589 "num_base_bdevs_operational": 3, 00:31:45.589 "process": { 00:31:45.589 "type": "rebuild", 00:31:45.589 "target": "spare", 00:31:45.589 "progress": { 00:31:45.589 "blocks": 45056, 00:31:45.589 "percent": 34 00:31:45.589 } 00:31:45.589 }, 00:31:45.589 "base_bdevs_list": [ 00:31:45.589 { 00:31:45.589 "name": "spare", 00:31:45.589 "uuid": "b1681eef-b39d-5980-8841-8e595035b315", 00:31:45.589 "is_configured": true, 00:31:45.589 "data_offset": 0, 00:31:45.589 "data_size": 65536 00:31:45.589 }, 00:31:45.589 { 00:31:45.589 "name": "BaseBdev2", 00:31:45.589 "uuid": "dd9745db-908c-5bf0-a88c-490fe3015975", 00:31:45.589 "is_configured": true, 00:31:45.589 "data_offset": 0, 00:31:45.589 "data_size": 65536 00:31:45.589 }, 00:31:45.589 { 00:31:45.589 "name": "BaseBdev3", 00:31:45.589 "uuid": "bb73958c-4e7d-56bb-974d-a24216a67729", 00:31:45.589 "is_configured": true, 00:31:45.589 "data_offset": 0, 00:31:45.589 "data_size": 65536 00:31:45.589 } 00:31:45.589 ] 00:31:45.589 }' 00:31:45.589 17:28:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:45.847 17:28:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:45.847 17:28:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:45.847 17:28:15 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:45.847 17:28:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:46.782 17:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:46.782 17:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:46.782 17:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:46.782 17:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:46.782 17:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:46.782 17:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:46.782 17:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:46.782 17:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:46.782 17:28:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.782 17:28:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.782 17:28:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.782 17:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:46.782 "name": "raid_bdev1", 00:31:46.782 "uuid": "f2cdb2ae-c936-4ca8-8523-81015ef0115a", 00:31:46.782 "strip_size_kb": 64, 00:31:46.782 "state": "online", 00:31:46.782 "raid_level": "raid5f", 00:31:46.782 "superblock": false, 00:31:46.782 "num_base_bdevs": 3, 00:31:46.782 "num_base_bdevs_discovered": 3, 00:31:46.782 "num_base_bdevs_operational": 3, 00:31:46.782 "process": { 00:31:46.782 "type": "rebuild", 00:31:46.782 "target": "spare", 00:31:46.782 "progress": { 00:31:46.782 "blocks": 67584, 00:31:46.782 "percent": 51 00:31:46.782 } 
00:31:46.782 }, 00:31:46.782 "base_bdevs_list": [ 00:31:46.782 { 00:31:46.782 "name": "spare", 00:31:46.782 "uuid": "b1681eef-b39d-5980-8841-8e595035b315", 00:31:46.782 "is_configured": true, 00:31:46.782 "data_offset": 0, 00:31:46.782 "data_size": 65536 00:31:46.782 }, 00:31:46.782 { 00:31:46.782 "name": "BaseBdev2", 00:31:46.782 "uuid": "dd9745db-908c-5bf0-a88c-490fe3015975", 00:31:46.782 "is_configured": true, 00:31:46.782 "data_offset": 0, 00:31:46.782 "data_size": 65536 00:31:46.782 }, 00:31:46.782 { 00:31:46.782 "name": "BaseBdev3", 00:31:46.782 "uuid": "bb73958c-4e7d-56bb-974d-a24216a67729", 00:31:46.782 "is_configured": true, 00:31:46.782 "data_offset": 0, 00:31:46.782 "data_size": 65536 00:31:46.782 } 00:31:46.782 ] 00:31:46.782 }' 00:31:46.782 17:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:46.782 17:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:46.782 17:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:47.041 17:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:47.041 17:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:47.975 17:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:47.975 17:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:47.975 17:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:47.975 17:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:47.975 17:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:47.975 17:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:47.975 17:28:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:47.975 17:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.975 17:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.975 17:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.975 17:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.975 17:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:47.975 "name": "raid_bdev1", 00:31:47.975 "uuid": "f2cdb2ae-c936-4ca8-8523-81015ef0115a", 00:31:47.975 "strip_size_kb": 64, 00:31:47.975 "state": "online", 00:31:47.975 "raid_level": "raid5f", 00:31:47.975 "superblock": false, 00:31:47.975 "num_base_bdevs": 3, 00:31:47.975 "num_base_bdevs_discovered": 3, 00:31:47.975 "num_base_bdevs_operational": 3, 00:31:47.975 "process": { 00:31:47.975 "type": "rebuild", 00:31:47.975 "target": "spare", 00:31:47.975 "progress": { 00:31:47.975 "blocks": 92160, 00:31:47.975 "percent": 70 00:31:47.975 } 00:31:47.975 }, 00:31:47.975 "base_bdevs_list": [ 00:31:47.975 { 00:31:47.975 "name": "spare", 00:31:47.975 "uuid": "b1681eef-b39d-5980-8841-8e595035b315", 00:31:47.975 "is_configured": true, 00:31:47.975 "data_offset": 0, 00:31:47.975 "data_size": 65536 00:31:47.975 }, 00:31:47.975 { 00:31:47.975 "name": "BaseBdev2", 00:31:47.975 "uuid": "dd9745db-908c-5bf0-a88c-490fe3015975", 00:31:47.975 "is_configured": true, 00:31:47.975 "data_offset": 0, 00:31:47.975 "data_size": 65536 00:31:47.976 }, 00:31:47.976 { 00:31:47.976 "name": "BaseBdev3", 00:31:47.976 "uuid": "bb73958c-4e7d-56bb-974d-a24216a67729", 00:31:47.976 "is_configured": true, 00:31:47.976 "data_offset": 0, 00:31:47.976 "data_size": 65536 00:31:47.976 } 00:31:47.976 ] 00:31:47.976 }' 00:31:47.976 17:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:31:47.976 17:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:47.976 17:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:47.976 17:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:47.976 17:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:49.353 "name": "raid_bdev1", 00:31:49.353 "uuid": "f2cdb2ae-c936-4ca8-8523-81015ef0115a", 00:31:49.353 "strip_size_kb": 64, 00:31:49.353 "state": "online", 00:31:49.353 "raid_level": "raid5f", 00:31:49.353 "superblock": 
false, 00:31:49.353 "num_base_bdevs": 3, 00:31:49.353 "num_base_bdevs_discovered": 3, 00:31:49.353 "num_base_bdevs_operational": 3, 00:31:49.353 "process": { 00:31:49.353 "type": "rebuild", 00:31:49.353 "target": "spare", 00:31:49.353 "progress": { 00:31:49.353 "blocks": 114688, 00:31:49.353 "percent": 87 00:31:49.353 } 00:31:49.353 }, 00:31:49.353 "base_bdevs_list": [ 00:31:49.353 { 00:31:49.353 "name": "spare", 00:31:49.353 "uuid": "b1681eef-b39d-5980-8841-8e595035b315", 00:31:49.353 "is_configured": true, 00:31:49.353 "data_offset": 0, 00:31:49.353 "data_size": 65536 00:31:49.353 }, 00:31:49.353 { 00:31:49.353 "name": "BaseBdev2", 00:31:49.353 "uuid": "dd9745db-908c-5bf0-a88c-490fe3015975", 00:31:49.353 "is_configured": true, 00:31:49.353 "data_offset": 0, 00:31:49.353 "data_size": 65536 00:31:49.353 }, 00:31:49.353 { 00:31:49.353 "name": "BaseBdev3", 00:31:49.353 "uuid": "bb73958c-4e7d-56bb-974d-a24216a67729", 00:31:49.353 "is_configured": true, 00:31:49.353 "data_offset": 0, 00:31:49.353 "data_size": 65536 00:31:49.353 } 00:31:49.353 ] 00:31:49.353 }' 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:49.353 17:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:49.955 [2024-11-26 17:28:19.796170] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:49.955 [2024-11-26 17:28:19.796300] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:49.955 [2024-11-26 17:28:19.796363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:31:50.214 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:50.214 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:50.214 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:50.214 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:50.214 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:50.214 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:50.214 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.214 17:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.214 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:50.214 17:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.214 17:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.214 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:50.214 "name": "raid_bdev1", 00:31:50.214 "uuid": "f2cdb2ae-c936-4ca8-8523-81015ef0115a", 00:31:50.214 "strip_size_kb": 64, 00:31:50.214 "state": "online", 00:31:50.214 "raid_level": "raid5f", 00:31:50.214 "superblock": false, 00:31:50.214 "num_base_bdevs": 3, 00:31:50.214 "num_base_bdevs_discovered": 3, 00:31:50.214 "num_base_bdevs_operational": 3, 00:31:50.214 "base_bdevs_list": [ 00:31:50.214 { 00:31:50.214 "name": "spare", 00:31:50.214 "uuid": "b1681eef-b39d-5980-8841-8e595035b315", 00:31:50.214 "is_configured": true, 00:31:50.214 "data_offset": 0, 00:31:50.214 "data_size": 65536 00:31:50.214 }, 00:31:50.214 { 00:31:50.214 "name": "BaseBdev2", 00:31:50.214 "uuid": 
"dd9745db-908c-5bf0-a88c-490fe3015975", 00:31:50.214 "is_configured": true, 00:31:50.214 "data_offset": 0, 00:31:50.214 "data_size": 65536 00:31:50.214 }, 00:31:50.214 { 00:31:50.214 "name": "BaseBdev3", 00:31:50.214 "uuid": "bb73958c-4e7d-56bb-974d-a24216a67729", 00:31:50.214 "is_configured": true, 00:31:50.214 "data_offset": 0, 00:31:50.214 "data_size": 65536 00:31:50.214 } 00:31:50.214 ] 00:31:50.214 }' 00:31:50.214 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:50.214 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:50.214 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:50.473 "name": "raid_bdev1", 00:31:50.473 "uuid": "f2cdb2ae-c936-4ca8-8523-81015ef0115a", 00:31:50.473 "strip_size_kb": 64, 00:31:50.473 "state": "online", 00:31:50.473 "raid_level": "raid5f", 00:31:50.473 "superblock": false, 00:31:50.473 "num_base_bdevs": 3, 00:31:50.473 "num_base_bdevs_discovered": 3, 00:31:50.473 "num_base_bdevs_operational": 3, 00:31:50.473 "base_bdevs_list": [ 00:31:50.473 { 00:31:50.473 "name": "spare", 00:31:50.473 "uuid": "b1681eef-b39d-5980-8841-8e595035b315", 00:31:50.473 "is_configured": true, 00:31:50.473 "data_offset": 0, 00:31:50.473 "data_size": 65536 00:31:50.473 }, 00:31:50.473 { 00:31:50.473 "name": "BaseBdev2", 00:31:50.473 "uuid": "dd9745db-908c-5bf0-a88c-490fe3015975", 00:31:50.473 "is_configured": true, 00:31:50.473 "data_offset": 0, 00:31:50.473 "data_size": 65536 00:31:50.473 }, 00:31:50.473 { 00:31:50.473 "name": "BaseBdev3", 00:31:50.473 "uuid": "bb73958c-4e7d-56bb-974d-a24216a67729", 00:31:50.473 "is_configured": true, 00:31:50.473 "data_offset": 0, 00:31:50.473 "data_size": 65536 00:31:50.473 } 00:31:50.473 ] 00:31:50.473 }' 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.473 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:50.473 "name": "raid_bdev1", 00:31:50.473 "uuid": "f2cdb2ae-c936-4ca8-8523-81015ef0115a", 00:31:50.473 "strip_size_kb": 64, 00:31:50.473 "state": "online", 00:31:50.473 "raid_level": "raid5f", 00:31:50.473 "superblock": false, 00:31:50.473 "num_base_bdevs": 3, 00:31:50.473 "num_base_bdevs_discovered": 3, 00:31:50.473 "num_base_bdevs_operational": 3, 00:31:50.473 "base_bdevs_list": [ 00:31:50.473 { 00:31:50.473 "name": "spare", 00:31:50.474 "uuid": "b1681eef-b39d-5980-8841-8e595035b315", 00:31:50.474 "is_configured": true, 00:31:50.474 "data_offset": 
0, 00:31:50.474 "data_size": 65536 00:31:50.474 }, 00:31:50.474 { 00:31:50.474 "name": "BaseBdev2", 00:31:50.474 "uuid": "dd9745db-908c-5bf0-a88c-490fe3015975", 00:31:50.474 "is_configured": true, 00:31:50.474 "data_offset": 0, 00:31:50.474 "data_size": 65536 00:31:50.474 }, 00:31:50.474 { 00:31:50.474 "name": "BaseBdev3", 00:31:50.474 "uuid": "bb73958c-4e7d-56bb-974d-a24216a67729", 00:31:50.474 "is_configured": true, 00:31:50.474 "data_offset": 0, 00:31:50.474 "data_size": 65536 00:31:50.474 } 00:31:50.474 ] 00:31:50.474 }' 00:31:50.474 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:50.474 17:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.041 [2024-11-26 17:28:20.917978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:51.041 [2024-11-26 17:28:20.918018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:51.041 [2024-11-26 17:28:20.918132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:51.041 [2024-11-26 17:28:20.918226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:51.041 [2024-11-26 17:28:20.918248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:51.041 17:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:51.300 /dev/nbd0 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd0 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:51.300 1+0 records in 00:31:51.300 1+0 records out 00:31:51.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023947 s, 17.1 MB/s 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:51.300 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:51.300 17:28:21 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:31:51.559 /dev/nbd1 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:51.559 1+0 records in 00:31:51.559 1+0 records out 00:31:51.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390245 s, 10.5 MB/s 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:51.559 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:51.818 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:31:51.818 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:51.818 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:51.818 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:51.818 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:51.818 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:51.818 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:51.818 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:51.818 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:51.818 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:51.818 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:51.818 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:51.818 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:51.818 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:31:52.077 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:52.077 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:52.078 17:28:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:31:52.078 17:28:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:52.078 17:28:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:52.078 17:28:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:52.078 17:28:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:52.078 17:28:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:52.078 17:28:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:52.078 17:28:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:52.078 17:28:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:52.078 17:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:31:52.078 17:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81734 00:31:52.078 17:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81734 ']' 00:31:52.078 17:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81734 00:31:52.078 17:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:31:52.078 17:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:52.078 17:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81734 00:31:52.337 killing process with pid 81734 00:31:52.337 Received shutdown signal, test time 
was about 60.000000 seconds 00:31:52.337 00:31:52.337 Latency(us) 00:31:52.337 [2024-11-26T17:28:22.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:52.337 [2024-11-26T17:28:22.451Z] =================================================================================================================== 00:31:52.337 [2024-11-26T17:28:22.451Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:52.337 17:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:52.337 17:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:52.337 17:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81734' 00:31:52.337 17:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81734 00:31:52.337 [2024-11-26 17:28:22.227106] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:52.337 17:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81734 00:31:52.596 [2024-11-26 17:28:22.644689] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:31:53.974 00:31:53.974 real 0m15.746s 00:31:53.974 user 0m19.173s 00:31:53.974 sys 0m2.405s 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.974 ************************************ 00:31:53.974 END TEST raid5f_rebuild_test 00:31:53.974 ************************************ 00:31:53.974 17:28:23 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:31:53.974 17:28:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:31:53.974 17:28:23 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:31:53.974 17:28:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:53.974 ************************************ 00:31:53.974 START TEST raid5f_rebuild_test_sb 00:31:53.974 ************************************ 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:53.974 17:28:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:31:53.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82183 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82183 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82183 ']' 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:53.974 17:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:53.974 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:53.974 Zero copy mechanism will not be used. 00:31:53.974 [2024-11-26 17:28:24.046322] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:31:53.975 [2024-11-26 17:28:24.046459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82183 ] 00:31:54.234 [2024-11-26 17:28:24.230876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.493 [2024-11-26 17:28:24.379256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.753 [2024-11-26 17:28:24.611833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:54.753 [2024-11-26 17:28:24.611878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:55.012 17:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:55.012 17:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:31:55.012 17:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:55.012 17:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:55.012 17:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.012 17:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.012 BaseBdev1_malloc 00:31:55.012 17:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.012 17:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:55.012 17:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.012 17:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.012 [2024-11-26 17:28:24.995792] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:55.012 [2024-11-26 17:28:24.995866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:55.012 [2024-11-26 17:28:24.995893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:55.012 [2024-11-26 17:28:24.995908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:55.012 [2024-11-26 17:28:24.998467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:55.012 [2024-11-26 17:28:24.998681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:55.012 BaseBdev1 00:31:55.012 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.012 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:55.012 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:55.012 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.012 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.012 BaseBdev2_malloc 00:31:55.012 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.012 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:55.012 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.012 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.012 [2024-11-26 17:28:25.057094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:55.012 [2024-11-26 17:28:25.057306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:31:55.012 [2024-11-26 17:28:25.057345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:55.012 [2024-11-26 17:28:25.057362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:55.012 [2024-11-26 17:28:25.060015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:55.012 [2024-11-26 17:28:25.060061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:55.012 BaseBdev2 00:31:55.012 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.012 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:55.012 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:55.012 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.012 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.272 BaseBdev3_malloc 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.272 [2024-11-26 17:28:25.130258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:55.272 [2024-11-26 17:28:25.130480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:55.272 [2024-11-26 17:28:25.130517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:55.272 [2024-11-26 
17:28:25.130550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:55.272 [2024-11-26 17:28:25.133111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:55.272 [2024-11-26 17:28:25.133159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:55.272 BaseBdev3 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.272 spare_malloc 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.272 spare_delay 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.272 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.273 [2024-11-26 17:28:25.198720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:55.273 [2024-11-26 17:28:25.198788] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:55.273 [2024-11-26 17:28:25.198810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:31:55.273 [2024-11-26 17:28:25.198826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:55.273 [2024-11-26 17:28:25.201378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:55.273 [2024-11-26 17:28:25.201544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:55.273 spare 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.273 [2024-11-26 17:28:25.210811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:55.273 [2024-11-26 17:28:25.213159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:55.273 [2024-11-26 17:28:25.213355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:55.273 [2024-11-26 17:28:25.213639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:55.273 [2024-11-26 17:28:25.213693] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:55.273 [2024-11-26 17:28:25.214095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:55.273 [2024-11-26 17:28:25.219959] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:55.273 [2024-11-26 17:28:25.220085] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:55.273 [2024-11-26 17:28:25.220400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:55.273 "name": "raid_bdev1", 00:31:55.273 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:31:55.273 "strip_size_kb": 64, 00:31:55.273 "state": "online", 00:31:55.273 "raid_level": "raid5f", 00:31:55.273 "superblock": true, 00:31:55.273 "num_base_bdevs": 3, 00:31:55.273 "num_base_bdevs_discovered": 3, 00:31:55.273 "num_base_bdevs_operational": 3, 00:31:55.273 "base_bdevs_list": [ 00:31:55.273 { 00:31:55.273 "name": "BaseBdev1", 00:31:55.273 "uuid": "45403100-30ed-583e-9b2d-fc4bc210a6c1", 00:31:55.273 "is_configured": true, 00:31:55.273 "data_offset": 2048, 00:31:55.273 "data_size": 63488 00:31:55.273 }, 00:31:55.273 { 00:31:55.273 "name": "BaseBdev2", 00:31:55.273 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:31:55.273 "is_configured": true, 00:31:55.273 "data_offset": 2048, 00:31:55.273 "data_size": 63488 00:31:55.273 }, 00:31:55.273 { 00:31:55.273 "name": "BaseBdev3", 00:31:55.273 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:31:55.273 "is_configured": true, 00:31:55.273 "data_offset": 2048, 00:31:55.273 "data_size": 63488 00:31:55.273 } 00:31:55.273 ] 00:31:55.273 }' 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:55.273 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.841 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:55.841 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:31:55.841 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.841 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.841 [2024-11-26 17:28:25.687268] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:55.842 17:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:56.100 [2024-11-26 17:28:25.982802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:56.100 /dev/nbd0 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:56.100 1+0 records in 00:31:56.100 1+0 records out 00:31:56.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455434 s, 9.0 MB/s 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:31:56.100 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:31:56.668 496+0 records in 00:31:56.669 496+0 records out 00:31:56.669 65011712 bytes (65 MB, 62 MiB) copied, 0.447493 s, 145 MB/s 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:56.669 [2024-11-26 17:28:26.742190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.669 [2024-11-26 17:28:26.774072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:56.669 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:56.927 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:56.927 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:56.927 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:56.927 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:56.927 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:56.927 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:56.927 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:56.927 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.927 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.927 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:56.927 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.927 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.927 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:56.927 "name": "raid_bdev1", 00:31:56.927 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:31:56.927 "strip_size_kb": 64, 00:31:56.927 "state": "online", 00:31:56.927 "raid_level": "raid5f", 00:31:56.927 "superblock": true, 00:31:56.927 "num_base_bdevs": 3, 00:31:56.927 "num_base_bdevs_discovered": 2, 00:31:56.927 "num_base_bdevs_operational": 2, 00:31:56.927 "base_bdevs_list": [ 00:31:56.927 { 00:31:56.927 "name": null, 00:31:56.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.927 "is_configured": 
false, 00:31:56.927 "data_offset": 0, 00:31:56.927 "data_size": 63488 00:31:56.927 }, 00:31:56.928 { 00:31:56.928 "name": "BaseBdev2", 00:31:56.928 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:31:56.928 "is_configured": true, 00:31:56.928 "data_offset": 2048, 00:31:56.928 "data_size": 63488 00:31:56.928 }, 00:31:56.928 { 00:31:56.928 "name": "BaseBdev3", 00:31:56.928 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:31:56.928 "is_configured": true, 00:31:56.928 "data_offset": 2048, 00:31:56.928 "data_size": 63488 00:31:56.928 } 00:31:56.928 ] 00:31:56.928 }' 00:31:56.928 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:56.928 17:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.187 17:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:57.187 17:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.187 17:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.187 [2024-11-26 17:28:27.145825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:57.187 [2024-11-26 17:28:27.165294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:31:57.187 17:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.187 17:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:31:57.187 [2024-11-26 17:28:27.174127] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:58.163 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:58.163 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:58.163 17:28:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:58.164 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:58.164 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:58.164 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.164 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.164 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.164 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.164 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.164 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:58.164 "name": "raid_bdev1", 00:31:58.164 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:31:58.164 "strip_size_kb": 64, 00:31:58.164 "state": "online", 00:31:58.164 "raid_level": "raid5f", 00:31:58.164 "superblock": true, 00:31:58.164 "num_base_bdevs": 3, 00:31:58.164 "num_base_bdevs_discovered": 3, 00:31:58.164 "num_base_bdevs_operational": 3, 00:31:58.164 "process": { 00:31:58.164 "type": "rebuild", 00:31:58.164 "target": "spare", 00:31:58.164 "progress": { 00:31:58.164 "blocks": 20480, 00:31:58.164 "percent": 16 00:31:58.164 } 00:31:58.164 }, 00:31:58.164 "base_bdevs_list": [ 00:31:58.164 { 00:31:58.164 "name": "spare", 00:31:58.164 "uuid": "17cc1946-a369-5302-b575-2a3338b83b79", 00:31:58.164 "is_configured": true, 00:31:58.164 "data_offset": 2048, 00:31:58.164 "data_size": 63488 00:31:58.164 }, 00:31:58.164 { 00:31:58.164 "name": "BaseBdev2", 00:31:58.164 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:31:58.164 "is_configured": true, 00:31:58.164 "data_offset": 2048, 00:31:58.164 "data_size": 63488 
00:31:58.164 }, 00:31:58.164 { 00:31:58.164 "name": "BaseBdev3", 00:31:58.164 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:31:58.164 "is_configured": true, 00:31:58.164 "data_offset": 2048, 00:31:58.164 "data_size": 63488 00:31:58.164 } 00:31:58.164 ] 00:31:58.164 }' 00:31:58.164 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:58.164 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:58.164 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.424 [2024-11-26 17:28:28.317971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:58.424 [2024-11-26 17:28:28.385859] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:58.424 [2024-11-26 17:28:28.386192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:58.424 [2024-11-26 17:28:28.386307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:58.424 [2024-11-26 17:28:28.386394] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:58.424 "name": "raid_bdev1", 00:31:58.424 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:31:58.424 "strip_size_kb": 64, 00:31:58.424 "state": "online", 00:31:58.424 "raid_level": "raid5f", 00:31:58.424 "superblock": true, 00:31:58.424 "num_base_bdevs": 3, 00:31:58.424 "num_base_bdevs_discovered": 2, 00:31:58.424 "num_base_bdevs_operational": 2, 00:31:58.424 "base_bdevs_list": [ 00:31:58.424 
{ 00:31:58.424 "name": null, 00:31:58.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.424 "is_configured": false, 00:31:58.424 "data_offset": 0, 00:31:58.424 "data_size": 63488 00:31:58.424 }, 00:31:58.424 { 00:31:58.424 "name": "BaseBdev2", 00:31:58.424 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:31:58.424 "is_configured": true, 00:31:58.424 "data_offset": 2048, 00:31:58.424 "data_size": 63488 00:31:58.424 }, 00:31:58.424 { 00:31:58.424 "name": "BaseBdev3", 00:31:58.424 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:31:58.424 "is_configured": true, 00:31:58.424 "data_offset": 2048, 00:31:58.424 "data_size": 63488 00:31:58.424 } 00:31:58.424 ] 00:31:58.424 }' 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:58.424 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:58.992 "name": "raid_bdev1", 00:31:58.992 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:31:58.992 "strip_size_kb": 64, 00:31:58.992 "state": "online", 00:31:58.992 "raid_level": "raid5f", 00:31:58.992 "superblock": true, 00:31:58.992 "num_base_bdevs": 3, 00:31:58.992 "num_base_bdevs_discovered": 2, 00:31:58.992 "num_base_bdevs_operational": 2, 00:31:58.992 "base_bdevs_list": [ 00:31:58.992 { 00:31:58.992 "name": null, 00:31:58.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.992 "is_configured": false, 00:31:58.992 "data_offset": 0, 00:31:58.992 "data_size": 63488 00:31:58.992 }, 00:31:58.992 { 00:31:58.992 "name": "BaseBdev2", 00:31:58.992 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:31:58.992 "is_configured": true, 00:31:58.992 "data_offset": 2048, 00:31:58.992 "data_size": 63488 00:31:58.992 }, 00:31:58.992 { 00:31:58.992 "name": "BaseBdev3", 00:31:58.992 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:31:58.992 "is_configured": true, 00:31:58.992 "data_offset": 2048, 00:31:58.992 "data_size": 63488 00:31:58.992 } 00:31:58.992 ] 00:31:58.992 }' 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:31:58.992 [2024-11-26 17:28:28.962106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:58.992 [2024-11-26 17:28:28.980946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.992 17:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:31:58.992 [2024-11-26 17:28:28.990839] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:59.928 17:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:59.928 17:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:59.928 17:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:59.928 17:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:59.928 17:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:59.928 17:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:59.928 17:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.928 17:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.928 17:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:59.928 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.928 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:59.928 "name": "raid_bdev1", 00:31:59.928 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:31:59.928 "strip_size_kb": 64, 00:31:59.928 "state": "online", 
00:31:59.928 "raid_level": "raid5f", 00:31:59.928 "superblock": true, 00:31:59.928 "num_base_bdevs": 3, 00:31:59.928 "num_base_bdevs_discovered": 3, 00:31:59.928 "num_base_bdevs_operational": 3, 00:31:59.928 "process": { 00:31:59.928 "type": "rebuild", 00:31:59.928 "target": "spare", 00:31:59.928 "progress": { 00:31:59.928 "blocks": 20480, 00:31:59.928 "percent": 16 00:31:59.928 } 00:31:59.928 }, 00:31:59.928 "base_bdevs_list": [ 00:31:59.928 { 00:31:59.928 "name": "spare", 00:31:59.928 "uuid": "17cc1946-a369-5302-b575-2a3338b83b79", 00:31:59.928 "is_configured": true, 00:31:59.928 "data_offset": 2048, 00:31:59.928 "data_size": 63488 00:31:59.928 }, 00:31:59.928 { 00:31:59.928 "name": "BaseBdev2", 00:31:59.928 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:31:59.928 "is_configured": true, 00:31:59.928 "data_offset": 2048, 00:31:59.928 "data_size": 63488 00:31:59.928 }, 00:31:59.928 { 00:31:59.928 "name": "BaseBdev3", 00:31:59.928 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:31:59.928 "is_configured": true, 00:31:59.928 "data_offset": 2048, 00:31:59.928 "data_size": 63488 00:31:59.928 } 00:31:59.928 ] 00:31:59.928 }' 00:32:00.186 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:32:00.187 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=576 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:00.187 "name": "raid_bdev1", 00:32:00.187 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:00.187 "strip_size_kb": 64, 00:32:00.187 "state": "online", 00:32:00.187 "raid_level": "raid5f", 00:32:00.187 "superblock": true, 00:32:00.187 "num_base_bdevs": 3, 00:32:00.187 "num_base_bdevs_discovered": 3, 00:32:00.187 "num_base_bdevs_operational": 3, 00:32:00.187 "process": { 00:32:00.187 "type": 
"rebuild", 00:32:00.187 "target": "spare", 00:32:00.187 "progress": { 00:32:00.187 "blocks": 22528, 00:32:00.187 "percent": 17 00:32:00.187 } 00:32:00.187 }, 00:32:00.187 "base_bdevs_list": [ 00:32:00.187 { 00:32:00.187 "name": "spare", 00:32:00.187 "uuid": "17cc1946-a369-5302-b575-2a3338b83b79", 00:32:00.187 "is_configured": true, 00:32:00.187 "data_offset": 2048, 00:32:00.187 "data_size": 63488 00:32:00.187 }, 00:32:00.187 { 00:32:00.187 "name": "BaseBdev2", 00:32:00.187 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:00.187 "is_configured": true, 00:32:00.187 "data_offset": 2048, 00:32:00.187 "data_size": 63488 00:32:00.187 }, 00:32:00.187 { 00:32:00.187 "name": "BaseBdev3", 00:32:00.187 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:00.187 "is_configured": true, 00:32:00.187 "data_offset": 2048, 00:32:00.187 "data_size": 63488 00:32:00.187 } 00:32:00.187 ] 00:32:00.187 }' 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:00.187 17:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:01.560 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:01.560 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:01.560 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:01.560 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:01.560 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:32:01.560 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:01.560 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:01.560 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.560 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:01.560 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.561 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.561 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:01.561 "name": "raid_bdev1", 00:32:01.561 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:01.561 "strip_size_kb": 64, 00:32:01.561 "state": "online", 00:32:01.561 "raid_level": "raid5f", 00:32:01.561 "superblock": true, 00:32:01.561 "num_base_bdevs": 3, 00:32:01.561 "num_base_bdevs_discovered": 3, 00:32:01.561 "num_base_bdevs_operational": 3, 00:32:01.561 "process": { 00:32:01.561 "type": "rebuild", 00:32:01.561 "target": "spare", 00:32:01.561 "progress": { 00:32:01.561 "blocks": 47104, 00:32:01.561 "percent": 37 00:32:01.561 } 00:32:01.561 }, 00:32:01.561 "base_bdevs_list": [ 00:32:01.561 { 00:32:01.561 "name": "spare", 00:32:01.561 "uuid": "17cc1946-a369-5302-b575-2a3338b83b79", 00:32:01.561 "is_configured": true, 00:32:01.561 "data_offset": 2048, 00:32:01.561 "data_size": 63488 00:32:01.561 }, 00:32:01.561 { 00:32:01.561 "name": "BaseBdev2", 00:32:01.561 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:01.561 "is_configured": true, 00:32:01.561 "data_offset": 2048, 00:32:01.561 "data_size": 63488 00:32:01.561 }, 00:32:01.561 { 00:32:01.561 "name": "BaseBdev3", 00:32:01.561 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:01.561 
"is_configured": true, 00:32:01.561 "data_offset": 2048, 00:32:01.561 "data_size": 63488 00:32:01.561 } 00:32:01.561 ] 00:32:01.561 }' 00:32:01.561 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:01.561 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:01.561 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:01.561 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:01.561 17:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:02.495 "name": "raid_bdev1", 00:32:02.495 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:02.495 "strip_size_kb": 64, 00:32:02.495 "state": "online", 00:32:02.495 "raid_level": "raid5f", 00:32:02.495 "superblock": true, 00:32:02.495 "num_base_bdevs": 3, 00:32:02.495 "num_base_bdevs_discovered": 3, 00:32:02.495 "num_base_bdevs_operational": 3, 00:32:02.495 "process": { 00:32:02.495 "type": "rebuild", 00:32:02.495 "target": "spare", 00:32:02.495 "progress": { 00:32:02.495 "blocks": 69632, 00:32:02.495 "percent": 54 00:32:02.495 } 00:32:02.495 }, 00:32:02.495 "base_bdevs_list": [ 00:32:02.495 { 00:32:02.495 "name": "spare", 00:32:02.495 "uuid": "17cc1946-a369-5302-b575-2a3338b83b79", 00:32:02.495 "is_configured": true, 00:32:02.495 "data_offset": 2048, 00:32:02.495 "data_size": 63488 00:32:02.495 }, 00:32:02.495 { 00:32:02.495 "name": "BaseBdev2", 00:32:02.495 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:02.495 "is_configured": true, 00:32:02.495 "data_offset": 2048, 00:32:02.495 "data_size": 63488 00:32:02.495 }, 00:32:02.495 { 00:32:02.495 "name": "BaseBdev3", 00:32:02.495 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:02.495 "is_configured": true, 00:32:02.495 "data_offset": 2048, 00:32:02.495 "data_size": 63488 00:32:02.495 } 00:32:02.495 ] 00:32:02.495 }' 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:02.495 17:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:03.870 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:32:03.870 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:03.870 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:03.870 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:03.870 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:03.870 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:03.870 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:03.870 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:03.870 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.870 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:03.870 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.870 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:03.870 "name": "raid_bdev1", 00:32:03.870 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:03.870 "strip_size_kb": 64, 00:32:03.870 "state": "online", 00:32:03.870 "raid_level": "raid5f", 00:32:03.870 "superblock": true, 00:32:03.870 "num_base_bdevs": 3, 00:32:03.870 "num_base_bdevs_discovered": 3, 00:32:03.870 "num_base_bdevs_operational": 3, 00:32:03.870 "process": { 00:32:03.870 "type": "rebuild", 00:32:03.870 "target": "spare", 00:32:03.870 "progress": { 00:32:03.870 "blocks": 92160, 00:32:03.870 "percent": 72 00:32:03.870 } 00:32:03.870 }, 00:32:03.870 "base_bdevs_list": [ 00:32:03.870 { 00:32:03.870 "name": "spare", 00:32:03.870 "uuid": "17cc1946-a369-5302-b575-2a3338b83b79", 00:32:03.871 "is_configured": true, 
00:32:03.871 "data_offset": 2048, 00:32:03.871 "data_size": 63488 00:32:03.871 }, 00:32:03.871 { 00:32:03.871 "name": "BaseBdev2", 00:32:03.871 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:03.871 "is_configured": true, 00:32:03.871 "data_offset": 2048, 00:32:03.871 "data_size": 63488 00:32:03.871 }, 00:32:03.871 { 00:32:03.871 "name": "BaseBdev3", 00:32:03.871 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:03.871 "is_configured": true, 00:32:03.871 "data_offset": 2048, 00:32:03.871 "data_size": 63488 00:32:03.871 } 00:32:03.871 ] 00:32:03.871 }' 00:32:03.871 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:03.871 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:03.871 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:03.871 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:03.871 17:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:04.804 "name": "raid_bdev1", 00:32:04.804 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:04.804 "strip_size_kb": 64, 00:32:04.804 "state": "online", 00:32:04.804 "raid_level": "raid5f", 00:32:04.804 "superblock": true, 00:32:04.804 "num_base_bdevs": 3, 00:32:04.804 "num_base_bdevs_discovered": 3, 00:32:04.804 "num_base_bdevs_operational": 3, 00:32:04.804 "process": { 00:32:04.804 "type": "rebuild", 00:32:04.804 "target": "spare", 00:32:04.804 "progress": { 00:32:04.804 "blocks": 114688, 00:32:04.804 "percent": 90 00:32:04.804 } 00:32:04.804 }, 00:32:04.804 "base_bdevs_list": [ 00:32:04.804 { 00:32:04.804 "name": "spare", 00:32:04.804 "uuid": "17cc1946-a369-5302-b575-2a3338b83b79", 00:32:04.804 "is_configured": true, 00:32:04.804 "data_offset": 2048, 00:32:04.804 "data_size": 63488 00:32:04.804 }, 00:32:04.804 { 00:32:04.804 "name": "BaseBdev2", 00:32:04.804 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:04.804 "is_configured": true, 00:32:04.804 "data_offset": 2048, 00:32:04.804 "data_size": 63488 00:32:04.804 }, 00:32:04.804 { 00:32:04.804 "name": "BaseBdev3", 00:32:04.804 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:04.804 "is_configured": true, 00:32:04.804 "data_offset": 2048, 00:32:04.804 "data_size": 63488 00:32:04.804 } 00:32:04.804 ] 00:32:04.804 }' 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:04.804 17:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:05.370 [2024-11-26 17:28:35.251052] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:05.370 [2024-11-26 17:28:35.251178] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:05.370 [2024-11-26 17:28:35.251343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:05.937 17:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:05.937 17:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:05.937 17:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:05.937 17:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:05.937 17:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:05.937 17:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:05.937 17:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:05.937 17:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.937 17:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.937 17:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:05.937 17:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.937 17:28:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:05.937 "name": "raid_bdev1", 00:32:05.937 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:05.937 "strip_size_kb": 64, 00:32:05.937 "state": "online", 00:32:05.937 "raid_level": "raid5f", 00:32:05.937 "superblock": true, 00:32:05.937 "num_base_bdevs": 3, 00:32:05.937 "num_base_bdevs_discovered": 3, 00:32:05.937 "num_base_bdevs_operational": 3, 00:32:05.937 "base_bdevs_list": [ 00:32:05.937 { 00:32:05.937 "name": "spare", 00:32:05.937 "uuid": "17cc1946-a369-5302-b575-2a3338b83b79", 00:32:05.937 "is_configured": true, 00:32:05.937 "data_offset": 2048, 00:32:05.937 "data_size": 63488 00:32:05.937 }, 00:32:05.937 { 00:32:05.937 "name": "BaseBdev2", 00:32:05.937 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:05.937 "is_configured": true, 00:32:05.937 "data_offset": 2048, 00:32:05.937 "data_size": 63488 00:32:05.937 }, 00:32:05.937 { 00:32:05.937 "name": "BaseBdev3", 00:32:05.937 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:05.937 "is_configured": true, 00:32:05.937 "data_offset": 2048, 00:32:05.937 "data_size": 63488 00:32:05.937 } 00:32:05.937 ] 00:32:05.937 }' 00:32:05.937 17:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:05.937 17:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:05.937 17:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:05.937 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:32:05.937 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:32:05.937 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:05.937 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:05.937 
17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:05.937 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:05.937 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:05.937 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:05.937 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.937 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.937 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:06.243 "name": "raid_bdev1", 00:32:06.243 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:06.243 "strip_size_kb": 64, 00:32:06.243 "state": "online", 00:32:06.243 "raid_level": "raid5f", 00:32:06.243 "superblock": true, 00:32:06.243 "num_base_bdevs": 3, 00:32:06.243 "num_base_bdevs_discovered": 3, 00:32:06.243 "num_base_bdevs_operational": 3, 00:32:06.243 "base_bdevs_list": [ 00:32:06.243 { 00:32:06.243 "name": "spare", 00:32:06.243 "uuid": "17cc1946-a369-5302-b575-2a3338b83b79", 00:32:06.243 "is_configured": true, 00:32:06.243 "data_offset": 2048, 00:32:06.243 "data_size": 63488 00:32:06.243 }, 00:32:06.243 { 00:32:06.243 "name": "BaseBdev2", 00:32:06.243 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:06.243 "is_configured": true, 00:32:06.243 "data_offset": 2048, 00:32:06.243 "data_size": 63488 00:32:06.243 }, 00:32:06.243 { 00:32:06.243 "name": "BaseBdev3", 00:32:06.243 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:06.243 "is_configured": true, 00:32:06.243 "data_offset": 2048, 
00:32:06.243 "data_size": 63488 00:32:06.243 } 00:32:06.243 ] 00:32:06.243 }' 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:06.243 "name": "raid_bdev1", 00:32:06.243 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:06.243 "strip_size_kb": 64, 00:32:06.243 "state": "online", 00:32:06.243 "raid_level": "raid5f", 00:32:06.243 "superblock": true, 00:32:06.243 "num_base_bdevs": 3, 00:32:06.243 "num_base_bdevs_discovered": 3, 00:32:06.243 "num_base_bdevs_operational": 3, 00:32:06.243 "base_bdevs_list": [ 00:32:06.243 { 00:32:06.243 "name": "spare", 00:32:06.243 "uuid": "17cc1946-a369-5302-b575-2a3338b83b79", 00:32:06.243 "is_configured": true, 00:32:06.243 "data_offset": 2048, 00:32:06.243 "data_size": 63488 00:32:06.243 }, 00:32:06.243 { 00:32:06.243 "name": "BaseBdev2", 00:32:06.243 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:06.243 "is_configured": true, 00:32:06.243 "data_offset": 2048, 00:32:06.243 "data_size": 63488 00:32:06.243 }, 00:32:06.243 { 00:32:06.243 "name": "BaseBdev3", 00:32:06.243 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:06.243 "is_configured": true, 00:32:06.243 "data_offset": 2048, 00:32:06.243 "data_size": 63488 00:32:06.243 } 00:32:06.243 ] 00:32:06.243 }' 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:06.243 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.501 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:06.501 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.501 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.759 [2024-11-26 17:28:36.617777] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:06.759 [2024-11-26 17:28:36.617980] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:06.759 [2024-11-26 17:28:36.618126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:06.759 [2024-11-26 17:28:36.618229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:06.759 [2024-11-26 17:28:36.618269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:06.759 17:28:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:06.759 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:07.019 /dev/nbd0 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:07.019 1+0 records in 00:32:07.019 1+0 records out 00:32:07.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395047 s, 10.4 MB/s 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:07.019 17:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:32:07.277 /dev/nbd1 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:32:07.277 
17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:07.277 1+0 records in 00:32:07.277 1+0 records out 00:32:07.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054297 s, 7.5 MB/s 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:32:07.277 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:07.278 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:07.278 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:07.535 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:32:07.535 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:07.535 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:07.535 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:32:07.535 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:32:07.535 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:07.535 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:07.793 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:07.794 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:07.794 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:07.794 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:32:07.794 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:32:07.794 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.794 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.052 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.052 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:08.052 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.052 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.052 [2024-11-26 17:28:37.918181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:08.052 [2024-11-26 17:28:37.918259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:08.052 [2024-11-26 17:28:37.918288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:32:08.052 [2024-11-26 17:28:37.918304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:08.052 [2024-11-26 17:28:37.921103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:08.052 [2024-11-26 17:28:37.921151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:08.052 [2024-11-26 17:28:37.921253] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:08.052 [2024-11-26 17:28:37.921312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:08.052 [2024-11-26 17:28:37.921472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:08.052 [2024-11-26 17:28:37.921602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:08.052 spare 00:32:08.052 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.052 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:32:08.052 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.052 17:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.052 [2024-11-26 17:28:38.021568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:32:08.052 [2024-11-26 17:28:38.021620] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:08.052 [2024-11-26 17:28:38.022030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:32:08.052 [2024-11-26 17:28:38.028013] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:32:08.052 [2024-11-26 17:28:38.028039] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:32:08.052 [2024-11-26 17:28:38.028280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.052 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:08.052 "name": "raid_bdev1", 00:32:08.052 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:08.052 "strip_size_kb": 64, 00:32:08.052 "state": "online", 00:32:08.052 "raid_level": "raid5f", 00:32:08.052 "superblock": true, 00:32:08.052 "num_base_bdevs": 3, 00:32:08.052 "num_base_bdevs_discovered": 3, 00:32:08.052 "num_base_bdevs_operational": 3, 00:32:08.052 "base_bdevs_list": [ 00:32:08.052 { 
00:32:08.052 "name": "spare", 00:32:08.052 "uuid": "17cc1946-a369-5302-b575-2a3338b83b79", 00:32:08.052 "is_configured": true, 00:32:08.052 "data_offset": 2048, 00:32:08.052 "data_size": 63488 00:32:08.052 }, 00:32:08.052 { 00:32:08.052 "name": "BaseBdev2", 00:32:08.052 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:08.052 "is_configured": true, 00:32:08.052 "data_offset": 2048, 00:32:08.053 "data_size": 63488 00:32:08.053 }, 00:32:08.053 { 00:32:08.053 "name": "BaseBdev3", 00:32:08.053 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:08.053 "is_configured": true, 00:32:08.053 "data_offset": 2048, 00:32:08.053 "data_size": 63488 00:32:08.053 } 00:32:08.053 ] 00:32:08.053 }' 00:32:08.053 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:08.053 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:08.619 "name": "raid_bdev1", 00:32:08.619 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:08.619 "strip_size_kb": 64, 00:32:08.619 "state": "online", 00:32:08.619 "raid_level": "raid5f", 00:32:08.619 "superblock": true, 00:32:08.619 "num_base_bdevs": 3, 00:32:08.619 "num_base_bdevs_discovered": 3, 00:32:08.619 "num_base_bdevs_operational": 3, 00:32:08.619 "base_bdevs_list": [ 00:32:08.619 { 00:32:08.619 "name": "spare", 00:32:08.619 "uuid": "17cc1946-a369-5302-b575-2a3338b83b79", 00:32:08.619 "is_configured": true, 00:32:08.619 "data_offset": 2048, 00:32:08.619 "data_size": 63488 00:32:08.619 }, 00:32:08.619 { 00:32:08.619 "name": "BaseBdev2", 00:32:08.619 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:08.619 "is_configured": true, 00:32:08.619 "data_offset": 2048, 00:32:08.619 "data_size": 63488 00:32:08.619 }, 00:32:08.619 { 00:32:08.619 "name": "BaseBdev3", 00:32:08.619 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:08.619 "is_configured": true, 00:32:08.619 "data_offset": 2048, 00:32:08.619 "data_size": 63488 00:32:08.619 } 00:32:08.619 ] 00:32:08.619 }' 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.619 [2024-11-26 17:28:38.650730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:08.619 "name": "raid_bdev1", 00:32:08.619 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:08.619 "strip_size_kb": 64, 00:32:08.619 "state": "online", 00:32:08.619 "raid_level": "raid5f", 00:32:08.619 "superblock": true, 00:32:08.619 "num_base_bdevs": 3, 00:32:08.619 "num_base_bdevs_discovered": 2, 00:32:08.619 "num_base_bdevs_operational": 2, 00:32:08.619 "base_bdevs_list": [ 00:32:08.619 { 00:32:08.619 "name": null, 00:32:08.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:08.619 "is_configured": false, 00:32:08.619 "data_offset": 0, 00:32:08.619 "data_size": 63488 00:32:08.619 }, 00:32:08.619 { 00:32:08.619 "name": "BaseBdev2", 00:32:08.619 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:08.619 "is_configured": true, 00:32:08.619 "data_offset": 2048, 00:32:08.619 "data_size": 63488 00:32:08.619 }, 00:32:08.619 { 00:32:08.619 "name": "BaseBdev3", 00:32:08.619 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:08.619 "is_configured": true, 00:32:08.619 "data_offset": 2048, 00:32:08.619 "data_size": 63488 00:32:08.619 } 00:32:08.619 ] 00:32:08.619 }' 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:08.619 17:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:32:09.185 17:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:09.185 17:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.185 17:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.185 [2024-11-26 17:28:39.094158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:09.185 [2024-11-26 17:28:39.094386] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:09.185 [2024-11-26 17:28:39.094408] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:09.185 [2024-11-26 17:28:39.094457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:09.185 [2024-11-26 17:28:39.110718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:32:09.185 17:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.185 17:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:32:09.185 [2024-11-26 17:28:39.118821] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:10.120 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:10.120 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:10.120 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:10.120 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:10.120 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:10.120 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:10.120 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:10.120 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.120 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.120 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.120 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:10.120 "name": "raid_bdev1", 00:32:10.120 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:10.120 "strip_size_kb": 64, 00:32:10.120 "state": "online", 00:32:10.120 "raid_level": "raid5f", 00:32:10.120 "superblock": true, 00:32:10.120 "num_base_bdevs": 3, 00:32:10.120 "num_base_bdevs_discovered": 3, 00:32:10.120 "num_base_bdevs_operational": 3, 00:32:10.120 "process": { 00:32:10.120 "type": "rebuild", 00:32:10.120 "target": "spare", 00:32:10.120 "progress": { 00:32:10.120 "blocks": 20480, 00:32:10.120 "percent": 16 00:32:10.120 } 00:32:10.120 }, 00:32:10.120 "base_bdevs_list": [ 00:32:10.120 { 00:32:10.121 "name": "spare", 00:32:10.121 "uuid": "17cc1946-a369-5302-b575-2a3338b83b79", 00:32:10.121 "is_configured": true, 00:32:10.121 "data_offset": 2048, 00:32:10.121 "data_size": 63488 00:32:10.121 }, 00:32:10.121 { 00:32:10.121 "name": "BaseBdev2", 00:32:10.121 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:10.121 "is_configured": true, 00:32:10.121 "data_offset": 2048, 00:32:10.121 "data_size": 63488 00:32:10.121 }, 00:32:10.121 { 00:32:10.121 "name": "BaseBdev3", 00:32:10.121 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:10.121 "is_configured": true, 00:32:10.121 "data_offset": 2048, 00:32:10.121 "data_size": 63488 00:32:10.121 } 00:32:10.121 ] 00:32:10.121 }' 00:32:10.121 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:32:10.121 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:10.121 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.380 [2024-11-26 17:28:40.274470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:10.380 [2024-11-26 17:28:40.330557] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:10.380 [2024-11-26 17:28:40.330690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:10.380 [2024-11-26 17:28:40.330712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:10.380 [2024-11-26 17:28:40.330725] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:10.380 "name": "raid_bdev1", 00:32:10.380 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:10.380 "strip_size_kb": 64, 00:32:10.380 "state": "online", 00:32:10.380 "raid_level": "raid5f", 00:32:10.380 "superblock": true, 00:32:10.380 "num_base_bdevs": 3, 00:32:10.380 "num_base_bdevs_discovered": 2, 00:32:10.380 "num_base_bdevs_operational": 2, 00:32:10.380 "base_bdevs_list": [ 00:32:10.380 { 00:32:10.380 "name": null, 00:32:10.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:10.380 "is_configured": false, 00:32:10.380 "data_offset": 0, 00:32:10.380 "data_size": 63488 00:32:10.380 }, 00:32:10.380 { 00:32:10.380 "name": "BaseBdev2", 00:32:10.380 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:10.380 "is_configured": true, 00:32:10.380 
"data_offset": 2048, 00:32:10.380 "data_size": 63488 00:32:10.380 }, 00:32:10.380 { 00:32:10.380 "name": "BaseBdev3", 00:32:10.380 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:10.380 "is_configured": true, 00:32:10.380 "data_offset": 2048, 00:32:10.380 "data_size": 63488 00:32:10.380 } 00:32:10.380 ] 00:32:10.380 }' 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:10.380 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.983 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:10.983 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.983 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.983 [2024-11-26 17:28:40.825819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:10.983 [2024-11-26 17:28:40.825914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:10.983 [2024-11-26 17:28:40.825947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:32:10.983 [2024-11-26 17:28:40.825969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:10.983 [2024-11-26 17:28:40.826624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:10.983 [2024-11-26 17:28:40.826653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:10.983 [2024-11-26 17:28:40.826793] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:10.983 [2024-11-26 17:28:40.826815] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:10.983 [2024-11-26 17:28:40.826829] bdev_raid.c:3758:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:10.983 [2024-11-26 17:28:40.826856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:10.983 [2024-11-26 17:28:40.844458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:32:10.983 spare 00:32:10.983 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.983 17:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:32:10.983 [2024-11-26 17:28:40.852619] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:11.922 "name": "raid_bdev1", 00:32:11.922 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 
00:32:11.922 "strip_size_kb": 64, 00:32:11.922 "state": "online", 00:32:11.922 "raid_level": "raid5f", 00:32:11.922 "superblock": true, 00:32:11.922 "num_base_bdevs": 3, 00:32:11.922 "num_base_bdevs_discovered": 3, 00:32:11.922 "num_base_bdevs_operational": 3, 00:32:11.922 "process": { 00:32:11.922 "type": "rebuild", 00:32:11.922 "target": "spare", 00:32:11.922 "progress": { 00:32:11.922 "blocks": 20480, 00:32:11.922 "percent": 16 00:32:11.922 } 00:32:11.922 }, 00:32:11.922 "base_bdevs_list": [ 00:32:11.922 { 00:32:11.922 "name": "spare", 00:32:11.922 "uuid": "17cc1946-a369-5302-b575-2a3338b83b79", 00:32:11.922 "is_configured": true, 00:32:11.922 "data_offset": 2048, 00:32:11.922 "data_size": 63488 00:32:11.922 }, 00:32:11.922 { 00:32:11.922 "name": "BaseBdev2", 00:32:11.922 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:11.922 "is_configured": true, 00:32:11.922 "data_offset": 2048, 00:32:11.922 "data_size": 63488 00:32:11.922 }, 00:32:11.922 { 00:32:11.922 "name": "BaseBdev3", 00:32:11.922 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:11.922 "is_configured": true, 00:32:11.922 "data_offset": 2048, 00:32:11.922 "data_size": 63488 00:32:11.922 } 00:32:11.922 ] 00:32:11.922 }' 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.922 17:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:32:11.922 [2024-11-26 17:28:42.000106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:12.181 [2024-11-26 17:28:42.064014] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:12.181 [2024-11-26 17:28:42.064114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:12.181 [2024-11-26 17:28:42.064138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:12.181 [2024-11-26 17:28:42.064148] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:12.181 
17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:12.181 "name": "raid_bdev1", 00:32:12.181 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:12.181 "strip_size_kb": 64, 00:32:12.181 "state": "online", 00:32:12.181 "raid_level": "raid5f", 00:32:12.181 "superblock": true, 00:32:12.181 "num_base_bdevs": 3, 00:32:12.181 "num_base_bdevs_discovered": 2, 00:32:12.181 "num_base_bdevs_operational": 2, 00:32:12.181 "base_bdevs_list": [ 00:32:12.181 { 00:32:12.181 "name": null, 00:32:12.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:12.181 "is_configured": false, 00:32:12.181 "data_offset": 0, 00:32:12.181 "data_size": 63488 00:32:12.181 }, 00:32:12.181 { 00:32:12.181 "name": "BaseBdev2", 00:32:12.181 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:12.181 "is_configured": true, 00:32:12.181 "data_offset": 2048, 00:32:12.181 "data_size": 63488 00:32:12.181 }, 00:32:12.181 { 00:32:12.181 "name": "BaseBdev3", 00:32:12.181 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:12.181 "is_configured": true, 00:32:12.181 "data_offset": 2048, 00:32:12.181 "data_size": 63488 00:32:12.181 } 00:32:12.181 ] 00:32:12.181 }' 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:12.181 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:12.440 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:12.440 17:28:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:12.440 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:12.440 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:12.440 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:12.700 "name": "raid_bdev1", 00:32:12.700 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:12.700 "strip_size_kb": 64, 00:32:12.700 "state": "online", 00:32:12.700 "raid_level": "raid5f", 00:32:12.700 "superblock": true, 00:32:12.700 "num_base_bdevs": 3, 00:32:12.700 "num_base_bdevs_discovered": 2, 00:32:12.700 "num_base_bdevs_operational": 2, 00:32:12.700 "base_bdevs_list": [ 00:32:12.700 { 00:32:12.700 "name": null, 00:32:12.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:12.700 "is_configured": false, 00:32:12.700 "data_offset": 0, 00:32:12.700 "data_size": 63488 00:32:12.700 }, 00:32:12.700 { 00:32:12.700 "name": "BaseBdev2", 00:32:12.700 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:12.700 "is_configured": true, 00:32:12.700 "data_offset": 2048, 00:32:12.700 "data_size": 63488 00:32:12.700 }, 00:32:12.700 { 00:32:12.700 "name": "BaseBdev3", 00:32:12.700 "uuid": 
"7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:12.700 "is_configured": true, 00:32:12.700 "data_offset": 2048, 00:32:12.700 "data_size": 63488 00:32:12.700 } 00:32:12.700 ] 00:32:12.700 }' 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:12.700 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.701 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:12.701 [2024-11-26 17:28:42.712766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:12.701 [2024-11-26 17:28:42.712962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:12.701 [2024-11-26 17:28:42.713009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:32:12.701 [2024-11-26 17:28:42.713022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:12.701 [2024-11-26 17:28:42.713599] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:12.701 [2024-11-26 17:28:42.713624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:12.701 [2024-11-26 17:28:42.713727] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:12.701 [2024-11-26 17:28:42.713749] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:12.701 [2024-11-26 17:28:42.713775] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:12.701 [2024-11-26 17:28:42.713789] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:32:12.701 BaseBdev1 00:32:12.701 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.701 17:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:32:13.642 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:13.642 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:13.642 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:13.642 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:13.642 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:13.642 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:13.642 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:13.642 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:13.642 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:13.642 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:13.642 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:13.642 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.642 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.642 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:13.642 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.902 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:13.902 "name": "raid_bdev1", 00:32:13.902 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:13.902 "strip_size_kb": 64, 00:32:13.902 "state": "online", 00:32:13.902 "raid_level": "raid5f", 00:32:13.902 "superblock": true, 00:32:13.902 "num_base_bdevs": 3, 00:32:13.902 "num_base_bdevs_discovered": 2, 00:32:13.902 "num_base_bdevs_operational": 2, 00:32:13.902 "base_bdevs_list": [ 00:32:13.902 { 00:32:13.902 "name": null, 00:32:13.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:13.902 "is_configured": false, 00:32:13.902 "data_offset": 0, 00:32:13.902 "data_size": 63488 00:32:13.902 }, 00:32:13.902 { 00:32:13.902 "name": "BaseBdev2", 00:32:13.902 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:13.902 "is_configured": true, 00:32:13.902 "data_offset": 2048, 00:32:13.902 "data_size": 63488 00:32:13.902 }, 00:32:13.902 { 00:32:13.902 "name": "BaseBdev3", 00:32:13.902 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:13.902 "is_configured": true, 00:32:13.902 "data_offset": 2048, 00:32:13.902 "data_size": 63488 00:32:13.902 } 00:32:13.902 ] 00:32:13.902 }' 00:32:13.902 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:32:13.902 17:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:14.161 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:14.161 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:14.161 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:14.161 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:14.161 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:14.161 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:14.161 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:14.161 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.161 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:14.161 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.161 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:14.161 "name": "raid_bdev1", 00:32:14.161 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:14.161 "strip_size_kb": 64, 00:32:14.161 "state": "online", 00:32:14.161 "raid_level": "raid5f", 00:32:14.161 "superblock": true, 00:32:14.161 "num_base_bdevs": 3, 00:32:14.161 "num_base_bdevs_discovered": 2, 00:32:14.161 "num_base_bdevs_operational": 2, 00:32:14.161 "base_bdevs_list": [ 00:32:14.161 { 00:32:14.161 "name": null, 00:32:14.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.161 "is_configured": false, 00:32:14.161 "data_offset": 0, 00:32:14.161 "data_size": 63488 00:32:14.161 }, 00:32:14.161 { 00:32:14.161 "name": 
"BaseBdev2", 00:32:14.161 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:14.161 "is_configured": true, 00:32:14.161 "data_offset": 2048, 00:32:14.161 "data_size": 63488 00:32:14.161 }, 00:32:14.161 { 00:32:14.161 "name": "BaseBdev3", 00:32:14.161 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:14.161 "is_configured": true, 00:32:14.161 "data_offset": 2048, 00:32:14.161 "data_size": 63488 00:32:14.161 } 00:32:14.161 ] 00:32:14.161 }' 00:32:14.161 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:14.421 [2024-11-26 17:28:44.354649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:14.421 [2024-11-26 17:28:44.354872] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:14.421 [2024-11-26 17:28:44.354892] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:14.421 request: 00:32:14.421 { 00:32:14.421 "base_bdev": "BaseBdev1", 00:32:14.421 "raid_bdev": "raid_bdev1", 00:32:14.421 "method": "bdev_raid_add_base_bdev", 00:32:14.421 "req_id": 1 00:32:14.421 } 00:32:14.421 Got JSON-RPC error response 00:32:14.421 response: 00:32:14.421 { 00:32:14.421 "code": -22, 00:32:14.421 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:14.421 } 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:14.421 17:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:15.358 "name": "raid_bdev1", 00:32:15.358 "uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:15.358 "strip_size_kb": 64, 00:32:15.358 "state": "online", 00:32:15.358 "raid_level": "raid5f", 00:32:15.358 "superblock": true, 00:32:15.358 "num_base_bdevs": 3, 00:32:15.358 "num_base_bdevs_discovered": 2, 00:32:15.358 "num_base_bdevs_operational": 2, 00:32:15.358 "base_bdevs_list": [ 00:32:15.358 { 00:32:15.358 "name": null, 00:32:15.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:15.358 "is_configured": false, 00:32:15.358 "data_offset": 0, 00:32:15.358 
"data_size": 63488 00:32:15.358 }, 00:32:15.358 { 00:32:15.358 "name": "BaseBdev2", 00:32:15.358 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:15.358 "is_configured": true, 00:32:15.358 "data_offset": 2048, 00:32:15.358 "data_size": 63488 00:32:15.358 }, 00:32:15.358 { 00:32:15.358 "name": "BaseBdev3", 00:32:15.358 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:15.358 "is_configured": true, 00:32:15.358 "data_offset": 2048, 00:32:15.358 "data_size": 63488 00:32:15.358 } 00:32:15.358 ] 00:32:15.358 }' 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:15.358 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:15.926 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:15.926 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:15.927 "name": "raid_bdev1", 00:32:15.927 
"uuid": "3df3b35d-04c9-4761-b2a9-22cbf66f7d10", 00:32:15.927 "strip_size_kb": 64, 00:32:15.927 "state": "online", 00:32:15.927 "raid_level": "raid5f", 00:32:15.927 "superblock": true, 00:32:15.927 "num_base_bdevs": 3, 00:32:15.927 "num_base_bdevs_discovered": 2, 00:32:15.927 "num_base_bdevs_operational": 2, 00:32:15.927 "base_bdevs_list": [ 00:32:15.927 { 00:32:15.927 "name": null, 00:32:15.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:15.927 "is_configured": false, 00:32:15.927 "data_offset": 0, 00:32:15.927 "data_size": 63488 00:32:15.927 }, 00:32:15.927 { 00:32:15.927 "name": "BaseBdev2", 00:32:15.927 "uuid": "cf8e5d46-8a7f-57d4-a602-5f7d7b37d149", 00:32:15.927 "is_configured": true, 00:32:15.927 "data_offset": 2048, 00:32:15.927 "data_size": 63488 00:32:15.927 }, 00:32:15.927 { 00:32:15.927 "name": "BaseBdev3", 00:32:15.927 "uuid": "7cda5292-a5fa-5d23-ada8-045b8c656b31", 00:32:15.927 "is_configured": true, 00:32:15.927 "data_offset": 2048, 00:32:15.927 "data_size": 63488 00:32:15.927 } 00:32:15.927 ] 00:32:15.927 }' 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82183 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82183 ']' 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82183 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82183 00:32:15.927 killing process with pid 82183 00:32:15.927 Received shutdown signal, test time was about 60.000000 seconds 00:32:15.927 00:32:15.927 Latency(us) 00:32:15.927 [2024-11-26T17:28:46.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.927 [2024-11-26T17:28:46.041Z] =================================================================================================================== 00:32:15.927 [2024-11-26T17:28:46.041Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82183' 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82183 00:32:15.927 [2024-11-26 17:28:45.952138] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:15.927 17:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82183 00:32:15.927 [2024-11-26 17:28:45.952343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:15.927 [2024-11-26 17:28:45.952454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:15.927 [2024-11-26 17:28:45.952479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:32:16.495 [2024-11-26 17:28:46.379433] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:17.876 17:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # 
return 0 00:32:17.876 00:32:17.876 real 0m23.668s 00:32:17.876 user 0m30.047s 00:32:17.876 sys 0m3.318s 00:32:17.876 ************************************ 00:32:17.876 END TEST raid5f_rebuild_test_sb 00:32:17.876 ************************************ 00:32:17.876 17:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:17.876 17:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:17.876 17:28:47 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:32:17.876 17:28:47 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:32:17.876 17:28:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:17.876 17:28:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:17.876 17:28:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:17.876 ************************************ 00:32:17.876 START TEST raid5f_state_function_test 00:32:17.876 ************************************ 00:32:17.876 17:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:32:17.876 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:32:17.876 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:32:17.876 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:32:17.876 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:17.876 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:17.876 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:17.876 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:17.877 17:28:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:32:17.877 
17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:32:17.877 Process raid pid: 82937 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82937 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82937' 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82937 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82937 ']' 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:17.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:17.877 17:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.877 [2024-11-26 17:28:47.804742] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:32:17.877 [2024-11-26 17:28:47.805070] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:18.136 [2024-11-26 17:28:47.995215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.136 [2024-11-26 17:28:48.137693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.394 [2024-11-26 17:28:48.375017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:18.394 [2024-11-26 17:28:48.375059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:18.654 [2024-11-26 17:28:48.648097] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:18.654 [2024-11-26 17:28:48.648167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:18.654 [2024-11-26 17:28:48.648180] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:18.654 [2024-11-26 17:28:48.648194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:18.654 [2024-11-26 17:28:48.648202] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:32:18.654 [2024-11-26 17:28:48.648214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:18.654 [2024-11-26 17:28:48.648221] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:18.654 [2024-11-26 17:28:48.648234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:18.654 17:28:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:18.654 "name": "Existed_Raid", 00:32:18.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:18.654 "strip_size_kb": 64, 00:32:18.654 "state": "configuring", 00:32:18.654 "raid_level": "raid5f", 00:32:18.654 "superblock": false, 00:32:18.654 "num_base_bdevs": 4, 00:32:18.654 "num_base_bdevs_discovered": 0, 00:32:18.654 "num_base_bdevs_operational": 4, 00:32:18.654 "base_bdevs_list": [ 00:32:18.654 { 00:32:18.654 "name": "BaseBdev1", 00:32:18.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:18.654 "is_configured": false, 00:32:18.654 "data_offset": 0, 00:32:18.654 "data_size": 0 00:32:18.654 }, 00:32:18.654 { 00:32:18.654 "name": "BaseBdev2", 00:32:18.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:18.654 "is_configured": false, 00:32:18.654 "data_offset": 0, 00:32:18.654 "data_size": 0 00:32:18.654 }, 00:32:18.654 { 00:32:18.654 "name": "BaseBdev3", 00:32:18.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:18.654 "is_configured": false, 00:32:18.654 "data_offset": 0, 00:32:18.654 "data_size": 0 00:32:18.654 }, 00:32:18.654 { 00:32:18.654 "name": "BaseBdev4", 00:32:18.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:18.654 "is_configured": false, 00:32:18.654 "data_offset": 0, 00:32:18.654 "data_size": 0 00:32:18.654 } 00:32:18.654 ] 00:32:18.654 }' 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:18.654 17:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.252 [2024-11-26 17:28:49.083455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:19.252 [2024-11-26 17:28:49.083504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.252 [2024-11-26 17:28:49.091424] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:19.252 [2024-11-26 17:28:49.091475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:19.252 [2024-11-26 17:28:49.091488] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:19.252 [2024-11-26 17:28:49.091501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:19.252 [2024-11-26 17:28:49.091509] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:19.252 [2024-11-26 17:28:49.091536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:19.252 [2024-11-26 17:28:49.091544] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:32:19.252 [2024-11-26 17:28:49.091557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.252 [2024-11-26 17:28:49.141586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:19.252 BaseBdev1 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.252 
17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.252 [ 00:32:19.252 { 00:32:19.252 "name": "BaseBdev1", 00:32:19.252 "aliases": [ 00:32:19.252 "015919ea-9d19-42ab-8da6-580d52b87e5d" 00:32:19.252 ], 00:32:19.252 "product_name": "Malloc disk", 00:32:19.252 "block_size": 512, 00:32:19.252 "num_blocks": 65536, 00:32:19.252 "uuid": "015919ea-9d19-42ab-8da6-580d52b87e5d", 00:32:19.252 "assigned_rate_limits": { 00:32:19.252 "rw_ios_per_sec": 0, 00:32:19.252 "rw_mbytes_per_sec": 0, 00:32:19.252 "r_mbytes_per_sec": 0, 00:32:19.252 "w_mbytes_per_sec": 0 00:32:19.252 }, 00:32:19.252 "claimed": true, 00:32:19.252 "claim_type": "exclusive_write", 00:32:19.252 "zoned": false, 00:32:19.252 "supported_io_types": { 00:32:19.252 "read": true, 00:32:19.252 "write": true, 00:32:19.252 "unmap": true, 00:32:19.252 "flush": true, 00:32:19.252 "reset": true, 00:32:19.252 "nvme_admin": false, 00:32:19.252 "nvme_io": false, 00:32:19.252 "nvme_io_md": false, 00:32:19.252 "write_zeroes": true, 00:32:19.252 "zcopy": true, 00:32:19.252 "get_zone_info": false, 00:32:19.252 "zone_management": false, 00:32:19.252 "zone_append": false, 00:32:19.252 "compare": false, 00:32:19.252 "compare_and_write": false, 00:32:19.252 "abort": true, 00:32:19.252 "seek_hole": false, 00:32:19.252 "seek_data": false, 00:32:19.252 "copy": true, 00:32:19.252 "nvme_iov_md": false 00:32:19.252 }, 00:32:19.252 "memory_domains": [ 00:32:19.252 { 00:32:19.252 "dma_device_id": "system", 00:32:19.252 "dma_device_type": 1 00:32:19.252 }, 00:32:19.252 { 00:32:19.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:19.252 "dma_device_type": 2 00:32:19.252 } 00:32:19.252 ], 00:32:19.252 "driver_specific": {} 00:32:19.252 } 
00:32:19.252 ] 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:19.252 "name": "Existed_Raid", 00:32:19.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.252 "strip_size_kb": 64, 00:32:19.252 "state": "configuring", 00:32:19.252 "raid_level": "raid5f", 00:32:19.252 "superblock": false, 00:32:19.252 "num_base_bdevs": 4, 00:32:19.252 "num_base_bdevs_discovered": 1, 00:32:19.252 "num_base_bdevs_operational": 4, 00:32:19.252 "base_bdevs_list": [ 00:32:19.252 { 00:32:19.252 "name": "BaseBdev1", 00:32:19.252 "uuid": "015919ea-9d19-42ab-8da6-580d52b87e5d", 00:32:19.252 "is_configured": true, 00:32:19.252 "data_offset": 0, 00:32:19.252 "data_size": 65536 00:32:19.252 }, 00:32:19.252 { 00:32:19.252 "name": "BaseBdev2", 00:32:19.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.252 "is_configured": false, 00:32:19.252 "data_offset": 0, 00:32:19.252 "data_size": 0 00:32:19.252 }, 00:32:19.252 { 00:32:19.252 "name": "BaseBdev3", 00:32:19.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.252 "is_configured": false, 00:32:19.252 "data_offset": 0, 00:32:19.252 "data_size": 0 00:32:19.252 }, 00:32:19.252 { 00:32:19.252 "name": "BaseBdev4", 00:32:19.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.252 "is_configured": false, 00:32:19.252 "data_offset": 0, 00:32:19.252 "data_size": 0 00:32:19.252 } 00:32:19.252 ] 00:32:19.252 }' 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:19.252 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.512 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:19.512 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.512 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.512 
[2024-11-26 17:28:49.604977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:19.512 [2024-11-26 17:28:49.605045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:19.512 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.512 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:19.512 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.512 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.512 [2024-11-26 17:28:49.617015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:19.512 [2024-11-26 17:28:49.619458] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:19.512 [2024-11-26 17:28:49.619624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:19.512 [2024-11-26 17:28:49.619721] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:19.512 [2024-11-26 17:28:49.619770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:19.512 [2024-11-26 17:28:49.619800] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:19.512 [2024-11-26 17:28:49.619833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:19.512 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.512 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:19.512 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:32:19.512 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:19.512 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:19.771 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:19.771 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:19.771 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:19.771 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:19.771 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:19.771 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:19.771 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:19.771 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:19.771 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:19.771 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:19.771 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.771 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.771 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.771 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:19.771 "name": "Existed_Raid", 00:32:19.772 "uuid": "00000000-0000-0000-0000-000000000000", 
00:32:19.772 "strip_size_kb": 64, 00:32:19.772 "state": "configuring", 00:32:19.772 "raid_level": "raid5f", 00:32:19.772 "superblock": false, 00:32:19.772 "num_base_bdevs": 4, 00:32:19.772 "num_base_bdevs_discovered": 1, 00:32:19.772 "num_base_bdevs_operational": 4, 00:32:19.772 "base_bdevs_list": [ 00:32:19.772 { 00:32:19.772 "name": "BaseBdev1", 00:32:19.772 "uuid": "015919ea-9d19-42ab-8da6-580d52b87e5d", 00:32:19.772 "is_configured": true, 00:32:19.772 "data_offset": 0, 00:32:19.772 "data_size": 65536 00:32:19.772 }, 00:32:19.772 { 00:32:19.772 "name": "BaseBdev2", 00:32:19.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.772 "is_configured": false, 00:32:19.772 "data_offset": 0, 00:32:19.772 "data_size": 0 00:32:19.772 }, 00:32:19.772 { 00:32:19.772 "name": "BaseBdev3", 00:32:19.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.772 "is_configured": false, 00:32:19.772 "data_offset": 0, 00:32:19.772 "data_size": 0 00:32:19.772 }, 00:32:19.772 { 00:32:19.772 "name": "BaseBdev4", 00:32:19.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.772 "is_configured": false, 00:32:19.772 "data_offset": 0, 00:32:19.772 "data_size": 0 00:32:19.772 } 00:32:19.772 ] 00:32:19.772 }' 00:32:19.772 17:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:19.772 17:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.030 [2024-11-26 17:28:50.113403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:20.030 BaseBdev2 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.030 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.030 [ 00:32:20.030 { 00:32:20.030 "name": "BaseBdev2", 00:32:20.030 "aliases": [ 00:32:20.030 "a0462c0d-c975-41a7-9353-2d67bddc8d3b" 00:32:20.030 ], 00:32:20.030 "product_name": "Malloc disk", 00:32:20.030 "block_size": 512, 00:32:20.030 "num_blocks": 65536, 00:32:20.030 "uuid": "a0462c0d-c975-41a7-9353-2d67bddc8d3b", 00:32:20.290 "assigned_rate_limits": { 00:32:20.290 "rw_ios_per_sec": 0, 00:32:20.290 "rw_mbytes_per_sec": 0, 00:32:20.290 
"r_mbytes_per_sec": 0, 00:32:20.290 "w_mbytes_per_sec": 0 00:32:20.290 }, 00:32:20.290 "claimed": true, 00:32:20.290 "claim_type": "exclusive_write", 00:32:20.290 "zoned": false, 00:32:20.290 "supported_io_types": { 00:32:20.290 "read": true, 00:32:20.290 "write": true, 00:32:20.290 "unmap": true, 00:32:20.290 "flush": true, 00:32:20.290 "reset": true, 00:32:20.290 "nvme_admin": false, 00:32:20.290 "nvme_io": false, 00:32:20.290 "nvme_io_md": false, 00:32:20.290 "write_zeroes": true, 00:32:20.290 "zcopy": true, 00:32:20.290 "get_zone_info": false, 00:32:20.290 "zone_management": false, 00:32:20.290 "zone_append": false, 00:32:20.290 "compare": false, 00:32:20.290 "compare_and_write": false, 00:32:20.290 "abort": true, 00:32:20.290 "seek_hole": false, 00:32:20.290 "seek_data": false, 00:32:20.290 "copy": true, 00:32:20.290 "nvme_iov_md": false 00:32:20.290 }, 00:32:20.290 "memory_domains": [ 00:32:20.290 { 00:32:20.290 "dma_device_id": "system", 00:32:20.290 "dma_device_type": 1 00:32:20.290 }, 00:32:20.290 { 00:32:20.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:20.290 "dma_device_type": 2 00:32:20.290 } 00:32:20.290 ], 00:32:20.290 "driver_specific": {} 00:32:20.290 } 00:32:20.290 ] 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.290 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:20.290 "name": "Existed_Raid", 00:32:20.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.291 "strip_size_kb": 64, 00:32:20.291 "state": "configuring", 00:32:20.291 "raid_level": "raid5f", 00:32:20.291 "superblock": false, 00:32:20.291 "num_base_bdevs": 4, 00:32:20.291 "num_base_bdevs_discovered": 2, 00:32:20.291 "num_base_bdevs_operational": 4, 00:32:20.291 "base_bdevs_list": [ 00:32:20.291 { 00:32:20.291 "name": "BaseBdev1", 00:32:20.291 "uuid": 
"015919ea-9d19-42ab-8da6-580d52b87e5d", 00:32:20.291 "is_configured": true, 00:32:20.291 "data_offset": 0, 00:32:20.291 "data_size": 65536 00:32:20.291 }, 00:32:20.291 { 00:32:20.291 "name": "BaseBdev2", 00:32:20.291 "uuid": "a0462c0d-c975-41a7-9353-2d67bddc8d3b", 00:32:20.291 "is_configured": true, 00:32:20.291 "data_offset": 0, 00:32:20.291 "data_size": 65536 00:32:20.291 }, 00:32:20.291 { 00:32:20.291 "name": "BaseBdev3", 00:32:20.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.291 "is_configured": false, 00:32:20.291 "data_offset": 0, 00:32:20.291 "data_size": 0 00:32:20.291 }, 00:32:20.291 { 00:32:20.291 "name": "BaseBdev4", 00:32:20.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.291 "is_configured": false, 00:32:20.291 "data_offset": 0, 00:32:20.291 "data_size": 0 00:32:20.291 } 00:32:20.291 ] 00:32:20.291 }' 00:32:20.291 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:20.291 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.550 [2024-11-26 17:28:50.648499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:20.550 BaseBdev3 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:20.550 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.809 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.809 [ 00:32:20.809 { 00:32:20.809 "name": "BaseBdev3", 00:32:20.809 "aliases": [ 00:32:20.809 "c2fc5da9-3ca8-4231-820d-ed62cb6dcd16" 00:32:20.809 ], 00:32:20.809 "product_name": "Malloc disk", 00:32:20.809 "block_size": 512, 00:32:20.809 "num_blocks": 65536, 00:32:20.809 "uuid": "c2fc5da9-3ca8-4231-820d-ed62cb6dcd16", 00:32:20.809 "assigned_rate_limits": { 00:32:20.809 "rw_ios_per_sec": 0, 00:32:20.809 "rw_mbytes_per_sec": 0, 00:32:20.809 "r_mbytes_per_sec": 0, 00:32:20.809 "w_mbytes_per_sec": 0 00:32:20.809 }, 00:32:20.809 "claimed": true, 00:32:20.809 "claim_type": "exclusive_write", 00:32:20.809 "zoned": false, 00:32:20.809 "supported_io_types": { 00:32:20.809 "read": true, 00:32:20.809 "write": true, 00:32:20.809 "unmap": true, 00:32:20.809 "flush": true, 00:32:20.809 "reset": true, 00:32:20.809 "nvme_admin": false, 
00:32:20.809 "nvme_io": false, 00:32:20.809 "nvme_io_md": false, 00:32:20.809 "write_zeroes": true, 00:32:20.809 "zcopy": true, 00:32:20.809 "get_zone_info": false, 00:32:20.810 "zone_management": false, 00:32:20.810 "zone_append": false, 00:32:20.810 "compare": false, 00:32:20.810 "compare_and_write": false, 00:32:20.810 "abort": true, 00:32:20.810 "seek_hole": false, 00:32:20.810 "seek_data": false, 00:32:20.810 "copy": true, 00:32:20.810 "nvme_iov_md": false 00:32:20.810 }, 00:32:20.810 "memory_domains": [ 00:32:20.810 { 00:32:20.810 "dma_device_id": "system", 00:32:20.810 "dma_device_type": 1 00:32:20.810 }, 00:32:20.810 { 00:32:20.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:20.810 "dma_device_type": 2 00:32:20.810 } 00:32:20.810 ], 00:32:20.810 "driver_specific": {} 00:32:20.810 } 00:32:20.810 ] 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:20.810 "name": "Existed_Raid", 00:32:20.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.810 "strip_size_kb": 64, 00:32:20.810 "state": "configuring", 00:32:20.810 "raid_level": "raid5f", 00:32:20.810 "superblock": false, 00:32:20.810 "num_base_bdevs": 4, 00:32:20.810 "num_base_bdevs_discovered": 3, 00:32:20.810 "num_base_bdevs_operational": 4, 00:32:20.810 "base_bdevs_list": [ 00:32:20.810 { 00:32:20.810 "name": "BaseBdev1", 00:32:20.810 "uuid": "015919ea-9d19-42ab-8da6-580d52b87e5d", 00:32:20.810 "is_configured": true, 00:32:20.810 "data_offset": 0, 00:32:20.810 "data_size": 65536 00:32:20.810 }, 00:32:20.810 { 00:32:20.810 "name": "BaseBdev2", 00:32:20.810 "uuid": "a0462c0d-c975-41a7-9353-2d67bddc8d3b", 00:32:20.810 "is_configured": true, 00:32:20.810 "data_offset": 0, 00:32:20.810 "data_size": 65536 00:32:20.810 }, 00:32:20.810 { 
00:32:20.810 "name": "BaseBdev3", 00:32:20.810 "uuid": "c2fc5da9-3ca8-4231-820d-ed62cb6dcd16", 00:32:20.810 "is_configured": true, 00:32:20.810 "data_offset": 0, 00:32:20.810 "data_size": 65536 00:32:20.810 }, 00:32:20.810 { 00:32:20.810 "name": "BaseBdev4", 00:32:20.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.810 "is_configured": false, 00:32:20.810 "data_offset": 0, 00:32:20.810 "data_size": 0 00:32:20.810 } 00:32:20.810 ] 00:32:20.810 }' 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:20.810 17:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.068 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:21.068 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.068 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.068 [2024-11-26 17:28:51.175486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:21.068 [2024-11-26 17:28:51.175600] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:21.068 [2024-11-26 17:28:51.175614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:21.068 [2024-11-26 17:28:51.175952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:21.327 [2024-11-26 17:28:51.184274] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:21.327 [2024-11-26 17:28:51.184306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:21.327 [2024-11-26 17:28:51.184691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:21.327 BaseBdev4 00:32:21.327 17:28:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.327 [ 00:32:21.327 { 00:32:21.327 "name": "BaseBdev4", 00:32:21.327 "aliases": [ 00:32:21.327 "2ff7f172-7a8b-45a7-8c3e-8e0f2fb1be36" 00:32:21.327 ], 00:32:21.327 "product_name": "Malloc disk", 00:32:21.327 "block_size": 512, 00:32:21.327 "num_blocks": 65536, 00:32:21.327 "uuid": "2ff7f172-7a8b-45a7-8c3e-8e0f2fb1be36", 00:32:21.327 "assigned_rate_limits": { 00:32:21.327 "rw_ios_per_sec": 0, 00:32:21.327 
"rw_mbytes_per_sec": 0, 00:32:21.327 "r_mbytes_per_sec": 0, 00:32:21.327 "w_mbytes_per_sec": 0 00:32:21.327 }, 00:32:21.327 "claimed": true, 00:32:21.327 "claim_type": "exclusive_write", 00:32:21.327 "zoned": false, 00:32:21.327 "supported_io_types": { 00:32:21.327 "read": true, 00:32:21.327 "write": true, 00:32:21.327 "unmap": true, 00:32:21.327 "flush": true, 00:32:21.327 "reset": true, 00:32:21.327 "nvme_admin": false, 00:32:21.327 "nvme_io": false, 00:32:21.327 "nvme_io_md": false, 00:32:21.327 "write_zeroes": true, 00:32:21.327 "zcopy": true, 00:32:21.327 "get_zone_info": false, 00:32:21.327 "zone_management": false, 00:32:21.327 "zone_append": false, 00:32:21.327 "compare": false, 00:32:21.327 "compare_and_write": false, 00:32:21.327 "abort": true, 00:32:21.327 "seek_hole": false, 00:32:21.327 "seek_data": false, 00:32:21.327 "copy": true, 00:32:21.327 "nvme_iov_md": false 00:32:21.327 }, 00:32:21.327 "memory_domains": [ 00:32:21.327 { 00:32:21.327 "dma_device_id": "system", 00:32:21.327 "dma_device_type": 1 00:32:21.327 }, 00:32:21.327 { 00:32:21.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:21.327 "dma_device_type": 2 00:32:21.327 } 00:32:21.327 ], 00:32:21.327 "driver_specific": {} 00:32:21.327 } 00:32:21.327 ] 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:32:21.327 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:21.327 17:28:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:21.328 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:21.328 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:21.328 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:21.328 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:21.328 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:21.328 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:21.328 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:21.328 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:21.328 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:21.328 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.328 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.328 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.328 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:21.328 "name": "Existed_Raid", 00:32:21.328 "uuid": "c9969586-0dda-48ef-8a41-5394e15e4739", 00:32:21.328 "strip_size_kb": 64, 00:32:21.328 "state": "online", 00:32:21.328 "raid_level": "raid5f", 00:32:21.328 "superblock": false, 00:32:21.328 "num_base_bdevs": 4, 00:32:21.328 "num_base_bdevs_discovered": 4, 00:32:21.328 "num_base_bdevs_operational": 4, 00:32:21.328 "base_bdevs_list": [ 00:32:21.328 { 00:32:21.328 "name": 
"BaseBdev1", 00:32:21.328 "uuid": "015919ea-9d19-42ab-8da6-580d52b87e5d", 00:32:21.328 "is_configured": true, 00:32:21.328 "data_offset": 0, 00:32:21.328 "data_size": 65536 00:32:21.328 }, 00:32:21.328 { 00:32:21.328 "name": "BaseBdev2", 00:32:21.328 "uuid": "a0462c0d-c975-41a7-9353-2d67bddc8d3b", 00:32:21.328 "is_configured": true, 00:32:21.328 "data_offset": 0, 00:32:21.328 "data_size": 65536 00:32:21.328 }, 00:32:21.328 { 00:32:21.328 "name": "BaseBdev3", 00:32:21.328 "uuid": "c2fc5da9-3ca8-4231-820d-ed62cb6dcd16", 00:32:21.328 "is_configured": true, 00:32:21.328 "data_offset": 0, 00:32:21.328 "data_size": 65536 00:32:21.328 }, 00:32:21.328 { 00:32:21.328 "name": "BaseBdev4", 00:32:21.328 "uuid": "2ff7f172-7a8b-45a7-8c3e-8e0f2fb1be36", 00:32:21.328 "is_configured": true, 00:32:21.328 "data_offset": 0, 00:32:21.328 "data_size": 65536 00:32:21.328 } 00:32:21.328 ] 00:32:21.328 }' 00:32:21.328 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:21.328 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.587 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:21.587 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:21.587 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:21.587 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:21.587 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:21.587 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:21.587 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:21.587 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:32:21.587 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.587 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.587 [2024-11-26 17:28:51.642004] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:21.587 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.587 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:21.587 "name": "Existed_Raid", 00:32:21.587 "aliases": [ 00:32:21.587 "c9969586-0dda-48ef-8a41-5394e15e4739" 00:32:21.587 ], 00:32:21.587 "product_name": "Raid Volume", 00:32:21.587 "block_size": 512, 00:32:21.587 "num_blocks": 196608, 00:32:21.587 "uuid": "c9969586-0dda-48ef-8a41-5394e15e4739", 00:32:21.587 "assigned_rate_limits": { 00:32:21.587 "rw_ios_per_sec": 0, 00:32:21.587 "rw_mbytes_per_sec": 0, 00:32:21.587 "r_mbytes_per_sec": 0, 00:32:21.587 "w_mbytes_per_sec": 0 00:32:21.587 }, 00:32:21.587 "claimed": false, 00:32:21.587 "zoned": false, 00:32:21.587 "supported_io_types": { 00:32:21.587 "read": true, 00:32:21.587 "write": true, 00:32:21.587 "unmap": false, 00:32:21.587 "flush": false, 00:32:21.587 "reset": true, 00:32:21.587 "nvme_admin": false, 00:32:21.587 "nvme_io": false, 00:32:21.587 "nvme_io_md": false, 00:32:21.587 "write_zeroes": true, 00:32:21.587 "zcopy": false, 00:32:21.587 "get_zone_info": false, 00:32:21.587 "zone_management": false, 00:32:21.587 "zone_append": false, 00:32:21.587 "compare": false, 00:32:21.587 "compare_and_write": false, 00:32:21.587 "abort": false, 00:32:21.587 "seek_hole": false, 00:32:21.587 "seek_data": false, 00:32:21.587 "copy": false, 00:32:21.587 "nvme_iov_md": false 00:32:21.587 }, 00:32:21.587 "driver_specific": { 00:32:21.587 "raid": { 00:32:21.587 "uuid": "c9969586-0dda-48ef-8a41-5394e15e4739", 00:32:21.587 "strip_size_kb": 64, 
00:32:21.587 "state": "online", 00:32:21.587 "raid_level": "raid5f", 00:32:21.587 "superblock": false, 00:32:21.587 "num_base_bdevs": 4, 00:32:21.587 "num_base_bdevs_discovered": 4, 00:32:21.587 "num_base_bdevs_operational": 4, 00:32:21.587 "base_bdevs_list": [ 00:32:21.587 { 00:32:21.587 "name": "BaseBdev1", 00:32:21.587 "uuid": "015919ea-9d19-42ab-8da6-580d52b87e5d", 00:32:21.587 "is_configured": true, 00:32:21.587 "data_offset": 0, 00:32:21.587 "data_size": 65536 00:32:21.587 }, 00:32:21.587 { 00:32:21.587 "name": "BaseBdev2", 00:32:21.587 "uuid": "a0462c0d-c975-41a7-9353-2d67bddc8d3b", 00:32:21.587 "is_configured": true, 00:32:21.587 "data_offset": 0, 00:32:21.587 "data_size": 65536 00:32:21.587 }, 00:32:21.587 { 00:32:21.587 "name": "BaseBdev3", 00:32:21.587 "uuid": "c2fc5da9-3ca8-4231-820d-ed62cb6dcd16", 00:32:21.587 "is_configured": true, 00:32:21.587 "data_offset": 0, 00:32:21.587 "data_size": 65536 00:32:21.587 }, 00:32:21.588 { 00:32:21.588 "name": "BaseBdev4", 00:32:21.588 "uuid": "2ff7f172-7a8b-45a7-8c3e-8e0f2fb1be36", 00:32:21.588 "is_configured": true, 00:32:21.588 "data_offset": 0, 00:32:21.588 "data_size": 65536 00:32:21.588 } 00:32:21.588 ] 00:32:21.588 } 00:32:21.588 } 00:32:21.588 }' 00:32:21.588 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:21.847 BaseBdev2 00:32:21.847 BaseBdev3 00:32:21.847 BaseBdev4' 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:21.847 17:28:51 
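The trace above extracts the configured base bdev names from the raid bdev's info dump with `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'`. A minimal Python equivalent of that filter, using an abbreviated copy of the JSON shape shown in the trace (field names taken from the dump; the full document is truncated here):

```python
# Abbreviated raid bdev info, shaped like the bdev_get_bdevs dump above.
raid_bdev_info = {
    "name": "Existed_Raid",
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "BaseBdev1", "is_configured": True},
                {"name": "BaseBdev2", "is_configured": True},
                {"name": "BaseBdev3", "is_configured": True},
                {"name": "BaseBdev4", "is_configured": True},
            ]
        }
    },
}

def configured_base_bdevs(info):
    """Return the names of configured base bdevs, mirroring the jq filter."""
    return [
        b["name"]
        for b in info["driver_specific"]["raid"]["base_bdevs_list"]
        if b["is_configured"]
    ]

print(configured_base_bdevs(raid_bdev_info))
```

The test then iterates over these names and compares each base bdev's `block_size`/`md_size`/`md_interleave`/`dif_type` tuple against the raid volume's.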
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.847 17:28:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:32:21.847 [2024-11-26 17:28:51.941358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:22.105 17:28:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.105 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:22.105 "name": "Existed_Raid", 00:32:22.105 "uuid": "c9969586-0dda-48ef-8a41-5394e15e4739", 00:32:22.105 "strip_size_kb": 64, 00:32:22.105 "state": "online", 00:32:22.105 "raid_level": "raid5f", 00:32:22.105 "superblock": false, 00:32:22.105 "num_base_bdevs": 4, 00:32:22.105 "num_base_bdevs_discovered": 3, 00:32:22.105 "num_base_bdevs_operational": 3, 00:32:22.105 "base_bdevs_list": [ 00:32:22.105 { 00:32:22.105 "name": null, 00:32:22.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:22.105 "is_configured": false, 00:32:22.105 "data_offset": 0, 00:32:22.105 "data_size": 65536 00:32:22.105 }, 00:32:22.105 { 00:32:22.105 "name": "BaseBdev2", 00:32:22.105 "uuid": "a0462c0d-c975-41a7-9353-2d67bddc8d3b", 00:32:22.105 "is_configured": true, 00:32:22.105 "data_offset": 0, 00:32:22.105 "data_size": 65536 00:32:22.106 }, 00:32:22.106 { 00:32:22.106 "name": "BaseBdev3", 00:32:22.106 "uuid": "c2fc5da9-3ca8-4231-820d-ed62cb6dcd16", 00:32:22.106 "is_configured": true, 00:32:22.106 "data_offset": 0, 00:32:22.106 "data_size": 65536 00:32:22.106 }, 00:32:22.106 { 00:32:22.106 "name": "BaseBdev4", 00:32:22.106 "uuid": "2ff7f172-7a8b-45a7-8c3e-8e0f2fb1be36", 00:32:22.106 "is_configured": true, 00:32:22.106 "data_offset": 0, 00:32:22.106 "data_size": 65536 00:32:22.106 } 00:32:22.106 ] 00:32:22.106 }' 00:32:22.106 
17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:22.106 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.681 [2024-11-26 17:28:52.531564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:22.681 [2024-11-26 17:28:52.531693] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:22.681 [2024-11-26 17:28:52.632936] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.681 [2024-11-26 17:28:52.684907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:22.681 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.947 [2024-11-26 17:28:52.845717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:32:22.947 [2024-11-26 17:28:52.845781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.947 17:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.947 17:28:52 
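The loop above deletes the base bdevs one at a time with `bdev_malloc_delete` and re-checks the raid state after each removal. A toy model of the redundancy rule this phase exercises (my reading of the trace, not SPDK's actual state machine): raid5f stores single parity, so the array stays `online` after losing one member and is deconfigured to `offline` once a second member is removed.

```python
def raid5f_state(num_base_bdevs, num_discovered):
    """Sketch of the expected raid5f state given remaining members.

    Single-parity raid5f tolerates exactly one missing base bdev;
    a second missing member exceeds the redundancy.
    """
    missing = num_base_bdevs - num_discovered
    return "online" if missing <= 1 else "offline"

print(raid5f_state(4, 4))  # all members present
print(raid5f_state(4, 3))  # BaseBdev1 removed, still redundant
print(raid5f_state(4, 2))  # second removal, array goes offline
```

This matches the trace: after `bdev_malloc_delete BaseBdev1` the test still verifies `Existed_Raid online raid5f 64 3`, while the later removals drive the raid bdev through `raid_bdev_deconfigure ... online to offline`.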
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.947 BaseBdev2 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:22.947 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.206 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.206 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:23.206 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.206 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.206 [ 00:32:23.206 { 00:32:23.206 "name": "BaseBdev2", 00:32:23.206 "aliases": [ 00:32:23.206 "321b2504-e08b-4b98-b16b-b7a8f5b3712d" 00:32:23.206 ], 00:32:23.206 "product_name": "Malloc disk", 00:32:23.206 "block_size": 512, 00:32:23.206 "num_blocks": 65536, 00:32:23.206 "uuid": "321b2504-e08b-4b98-b16b-b7a8f5b3712d", 00:32:23.206 "assigned_rate_limits": { 00:32:23.206 "rw_ios_per_sec": 0, 00:32:23.206 "rw_mbytes_per_sec": 0, 00:32:23.206 "r_mbytes_per_sec": 0, 00:32:23.206 "w_mbytes_per_sec": 0 00:32:23.206 }, 00:32:23.206 "claimed": false, 00:32:23.206 "zoned": false, 00:32:23.206 "supported_io_types": { 00:32:23.206 "read": true, 00:32:23.206 "write": true, 00:32:23.206 "unmap": true, 00:32:23.206 "flush": true, 00:32:23.206 "reset": true, 00:32:23.206 "nvme_admin": false, 00:32:23.206 "nvme_io": false, 00:32:23.206 "nvme_io_md": false, 00:32:23.206 "write_zeroes": true, 00:32:23.206 "zcopy": true, 00:32:23.206 "get_zone_info": false, 00:32:23.206 "zone_management": false, 00:32:23.206 "zone_append": false, 00:32:23.206 "compare": false, 00:32:23.206 "compare_and_write": false, 00:32:23.206 "abort": true, 00:32:23.206 "seek_hole": false, 00:32:23.207 "seek_data": false, 00:32:23.207 "copy": true, 00:32:23.207 "nvme_iov_md": false 00:32:23.207 }, 00:32:23.207 "memory_domains": [ 00:32:23.207 { 00:32:23.207 "dma_device_id": "system", 00:32:23.207 "dma_device_type": 1 00:32:23.207 }, 
00:32:23.207 { 00:32:23.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:23.207 "dma_device_type": 2 00:32:23.207 } 00:32:23.207 ], 00:32:23.207 "driver_specific": {} 00:32:23.207 } 00:32:23.207 ] 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.207 BaseBdev3 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.207 [ 00:32:23.207 { 00:32:23.207 "name": "BaseBdev3", 00:32:23.207 "aliases": [ 00:32:23.207 "0a7d3c75-fd2f-44a6-a051-2d7ddb32f311" 00:32:23.207 ], 00:32:23.207 "product_name": "Malloc disk", 00:32:23.207 "block_size": 512, 00:32:23.207 "num_blocks": 65536, 00:32:23.207 "uuid": "0a7d3c75-fd2f-44a6-a051-2d7ddb32f311", 00:32:23.207 "assigned_rate_limits": { 00:32:23.207 "rw_ios_per_sec": 0, 00:32:23.207 "rw_mbytes_per_sec": 0, 00:32:23.207 "r_mbytes_per_sec": 0, 00:32:23.207 "w_mbytes_per_sec": 0 00:32:23.207 }, 00:32:23.207 "claimed": false, 00:32:23.207 "zoned": false, 00:32:23.207 "supported_io_types": { 00:32:23.207 "read": true, 00:32:23.207 "write": true, 00:32:23.207 "unmap": true, 00:32:23.207 "flush": true, 00:32:23.207 "reset": true, 00:32:23.207 "nvme_admin": false, 00:32:23.207 "nvme_io": false, 00:32:23.207 "nvme_io_md": false, 00:32:23.207 "write_zeroes": true, 00:32:23.207 "zcopy": true, 00:32:23.207 "get_zone_info": false, 00:32:23.207 "zone_management": false, 00:32:23.207 "zone_append": false, 00:32:23.207 "compare": false, 00:32:23.207 "compare_and_write": false, 00:32:23.207 "abort": true, 00:32:23.207 "seek_hole": false, 00:32:23.207 "seek_data": false, 00:32:23.207 "copy": true, 00:32:23.207 "nvme_iov_md": false 00:32:23.207 }, 00:32:23.207 "memory_domains": [ 00:32:23.207 { 00:32:23.207 "dma_device_id": "system", 00:32:23.207 
"dma_device_type": 1 00:32:23.207 }, 00:32:23.207 { 00:32:23.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:23.207 "dma_device_type": 2 00:32:23.207 } 00:32:23.207 ], 00:32:23.207 "driver_specific": {} 00:32:23.207 } 00:32:23.207 ] 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.207 BaseBdev4 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:23.207 17:28:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.207 [ 00:32:23.207 { 00:32:23.207 "name": "BaseBdev4", 00:32:23.207 "aliases": [ 00:32:23.207 "561be712-8259-4c99-985a-73362fd571cb" 00:32:23.207 ], 00:32:23.207 "product_name": "Malloc disk", 00:32:23.207 "block_size": 512, 00:32:23.207 "num_blocks": 65536, 00:32:23.207 "uuid": "561be712-8259-4c99-985a-73362fd571cb", 00:32:23.207 "assigned_rate_limits": { 00:32:23.207 "rw_ios_per_sec": 0, 00:32:23.207 "rw_mbytes_per_sec": 0, 00:32:23.207 "r_mbytes_per_sec": 0, 00:32:23.207 "w_mbytes_per_sec": 0 00:32:23.207 }, 00:32:23.207 "claimed": false, 00:32:23.207 "zoned": false, 00:32:23.207 "supported_io_types": { 00:32:23.207 "read": true, 00:32:23.207 "write": true, 00:32:23.207 "unmap": true, 00:32:23.207 "flush": true, 00:32:23.207 "reset": true, 00:32:23.207 "nvme_admin": false, 00:32:23.207 "nvme_io": false, 00:32:23.207 "nvme_io_md": false, 00:32:23.207 "write_zeroes": true, 00:32:23.207 "zcopy": true, 00:32:23.207 "get_zone_info": false, 00:32:23.207 "zone_management": false, 00:32:23.207 "zone_append": false, 00:32:23.207 "compare": false, 00:32:23.207 "compare_and_write": false, 00:32:23.207 "abort": true, 00:32:23.207 "seek_hole": false, 00:32:23.207 "seek_data": false, 00:32:23.207 "copy": true, 00:32:23.207 "nvme_iov_md": false 00:32:23.207 }, 00:32:23.207 "memory_domains": [ 00:32:23.207 { 00:32:23.207 
"dma_device_id": "system", 00:32:23.207 "dma_device_type": 1 00:32:23.207 }, 00:32:23.207 { 00:32:23.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:23.207 "dma_device_type": 2 00:32:23.207 } 00:32:23.207 ], 00:32:23.207 "driver_specific": {} 00:32:23.207 } 00:32:23.207 ] 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.207 [2024-11-26 17:28:53.280572] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:23.207 [2024-11-26 17:28:53.280634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:23.207 [2024-11-26 17:28:53.280667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:23.207 [2024-11-26 17:28:53.283045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:23.207 [2024-11-26 17:28:53.283109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:32:23.207 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:23.208 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:23.208 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:23.208 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:23.208 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:23.208 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:23.208 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:23.208 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:23.208 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:23.208 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:23.208 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.208 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.208 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:23.208 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.466 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:23.466 "name": "Existed_Raid", 00:32:23.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.466 "strip_size_kb": 64, 00:32:23.466 "state": "configuring", 00:32:23.466 "raid_level": "raid5f", 00:32:23.466 "superblock": false, 00:32:23.466 
"num_base_bdevs": 4, 00:32:23.466 "num_base_bdevs_discovered": 3, 00:32:23.466 "num_base_bdevs_operational": 4, 00:32:23.466 "base_bdevs_list": [ 00:32:23.466 { 00:32:23.466 "name": "BaseBdev1", 00:32:23.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.466 "is_configured": false, 00:32:23.466 "data_offset": 0, 00:32:23.466 "data_size": 0 00:32:23.466 }, 00:32:23.466 { 00:32:23.466 "name": "BaseBdev2", 00:32:23.466 "uuid": "321b2504-e08b-4b98-b16b-b7a8f5b3712d", 00:32:23.466 "is_configured": true, 00:32:23.466 "data_offset": 0, 00:32:23.467 "data_size": 65536 00:32:23.467 }, 00:32:23.467 { 00:32:23.467 "name": "BaseBdev3", 00:32:23.467 "uuid": "0a7d3c75-fd2f-44a6-a051-2d7ddb32f311", 00:32:23.467 "is_configured": true, 00:32:23.467 "data_offset": 0, 00:32:23.467 "data_size": 65536 00:32:23.467 }, 00:32:23.467 { 00:32:23.467 "name": "BaseBdev4", 00:32:23.467 "uuid": "561be712-8259-4c99-985a-73362fd571cb", 00:32:23.467 "is_configured": true, 00:32:23.467 "data_offset": 0, 00:32:23.467 "data_size": 65536 00:32:23.467 } 00:32:23.467 ] 00:32:23.467 }' 00:32:23.467 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:23.467 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.726 [2024-11-26 17:28:53.723951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:23.726 "name": "Existed_Raid", 00:32:23.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.726 "strip_size_kb": 64, 00:32:23.726 "state": "configuring", 00:32:23.726 "raid_level": "raid5f", 00:32:23.726 "superblock": false, 00:32:23.726 "num_base_bdevs": 4, 
00:32:23.726 "num_base_bdevs_discovered": 2, 00:32:23.726 "num_base_bdevs_operational": 4, 00:32:23.726 "base_bdevs_list": [ 00:32:23.726 { 00:32:23.726 "name": "BaseBdev1", 00:32:23.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.726 "is_configured": false, 00:32:23.726 "data_offset": 0, 00:32:23.726 "data_size": 0 00:32:23.726 }, 00:32:23.726 { 00:32:23.726 "name": null, 00:32:23.726 "uuid": "321b2504-e08b-4b98-b16b-b7a8f5b3712d", 00:32:23.726 "is_configured": false, 00:32:23.726 "data_offset": 0, 00:32:23.726 "data_size": 65536 00:32:23.726 }, 00:32:23.726 { 00:32:23.726 "name": "BaseBdev3", 00:32:23.726 "uuid": "0a7d3c75-fd2f-44a6-a051-2d7ddb32f311", 00:32:23.726 "is_configured": true, 00:32:23.726 "data_offset": 0, 00:32:23.726 "data_size": 65536 00:32:23.726 }, 00:32:23.726 { 00:32:23.726 "name": "BaseBdev4", 00:32:23.726 "uuid": "561be712-8259-4c99-985a-73362fd571cb", 00:32:23.726 "is_configured": true, 00:32:23.726 "data_offset": 0, 00:32:23.726 "data_size": 65536 00:32:23.726 } 00:32:23.726 ] 00:32:23.726 }' 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:23.726 17:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.292 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:24.292 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.292 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.292 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:32:24.293 17:28:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.293 [2024-11-26 17:28:54.283824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:24.293 BaseBdev1 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.293 17:28:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.293 [ 00:32:24.293 { 00:32:24.293 "name": "BaseBdev1", 00:32:24.293 "aliases": [ 00:32:24.293 "a175361c-2ff0-4a48-96f1-5a37b14af811" 00:32:24.293 ], 00:32:24.293 "product_name": "Malloc disk", 00:32:24.293 "block_size": 512, 00:32:24.293 "num_blocks": 65536, 00:32:24.293 "uuid": "a175361c-2ff0-4a48-96f1-5a37b14af811", 00:32:24.293 "assigned_rate_limits": { 00:32:24.293 "rw_ios_per_sec": 0, 00:32:24.293 "rw_mbytes_per_sec": 0, 00:32:24.293 "r_mbytes_per_sec": 0, 00:32:24.293 "w_mbytes_per_sec": 0 00:32:24.293 }, 00:32:24.293 "claimed": true, 00:32:24.293 "claim_type": "exclusive_write", 00:32:24.293 "zoned": false, 00:32:24.293 "supported_io_types": { 00:32:24.293 "read": true, 00:32:24.293 "write": true, 00:32:24.293 "unmap": true, 00:32:24.293 "flush": true, 00:32:24.293 "reset": true, 00:32:24.293 "nvme_admin": false, 00:32:24.293 "nvme_io": false, 00:32:24.293 "nvme_io_md": false, 00:32:24.293 "write_zeroes": true, 00:32:24.293 "zcopy": true, 00:32:24.293 "get_zone_info": false, 00:32:24.293 "zone_management": false, 00:32:24.293 "zone_append": false, 00:32:24.293 "compare": false, 00:32:24.293 "compare_and_write": false, 00:32:24.293 "abort": true, 00:32:24.293 "seek_hole": false, 00:32:24.293 "seek_data": false, 00:32:24.293 "copy": true, 00:32:24.293 "nvme_iov_md": false 00:32:24.293 }, 00:32:24.293 "memory_domains": [ 00:32:24.293 { 00:32:24.293 "dma_device_id": "system", 00:32:24.293 "dma_device_type": 1 00:32:24.293 }, 00:32:24.293 { 00:32:24.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:24.293 "dma_device_type": 2 00:32:24.293 } 00:32:24.293 ], 00:32:24.293 "driver_specific": {} 00:32:24.293 } 00:32:24.293 ] 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:24.293 17:28:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.293 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:24.293 "name": "Existed_Raid", 00:32:24.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.293 "strip_size_kb": 64, 00:32:24.293 "state": 
"configuring", 00:32:24.293 "raid_level": "raid5f", 00:32:24.293 "superblock": false, 00:32:24.293 "num_base_bdevs": 4, 00:32:24.293 "num_base_bdevs_discovered": 3, 00:32:24.293 "num_base_bdevs_operational": 4, 00:32:24.293 "base_bdevs_list": [ 00:32:24.293 { 00:32:24.293 "name": "BaseBdev1", 00:32:24.293 "uuid": "a175361c-2ff0-4a48-96f1-5a37b14af811", 00:32:24.293 "is_configured": true, 00:32:24.293 "data_offset": 0, 00:32:24.293 "data_size": 65536 00:32:24.293 }, 00:32:24.293 { 00:32:24.293 "name": null, 00:32:24.293 "uuid": "321b2504-e08b-4b98-b16b-b7a8f5b3712d", 00:32:24.294 "is_configured": false, 00:32:24.294 "data_offset": 0, 00:32:24.294 "data_size": 65536 00:32:24.294 }, 00:32:24.294 { 00:32:24.294 "name": "BaseBdev3", 00:32:24.294 "uuid": "0a7d3c75-fd2f-44a6-a051-2d7ddb32f311", 00:32:24.294 "is_configured": true, 00:32:24.294 "data_offset": 0, 00:32:24.294 "data_size": 65536 00:32:24.294 }, 00:32:24.294 { 00:32:24.294 "name": "BaseBdev4", 00:32:24.294 "uuid": "561be712-8259-4c99-985a-73362fd571cb", 00:32:24.294 "is_configured": true, 00:32:24.294 "data_offset": 0, 00:32:24.294 "data_size": 65536 00:32:24.294 } 00:32:24.294 ] 00:32:24.294 }' 00:32:24.294 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:24.294 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.862 17:28:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.862 [2024-11-26 17:28:54.759285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:24.862 17:28:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:24.862 "name": "Existed_Raid", 00:32:24.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.862 "strip_size_kb": 64, 00:32:24.862 "state": "configuring", 00:32:24.862 "raid_level": "raid5f", 00:32:24.862 "superblock": false, 00:32:24.862 "num_base_bdevs": 4, 00:32:24.862 "num_base_bdevs_discovered": 2, 00:32:24.862 "num_base_bdevs_operational": 4, 00:32:24.862 "base_bdevs_list": [ 00:32:24.862 { 00:32:24.862 "name": "BaseBdev1", 00:32:24.862 "uuid": "a175361c-2ff0-4a48-96f1-5a37b14af811", 00:32:24.862 "is_configured": true, 00:32:24.862 "data_offset": 0, 00:32:24.862 "data_size": 65536 00:32:24.862 }, 00:32:24.862 { 00:32:24.862 "name": null, 00:32:24.862 "uuid": "321b2504-e08b-4b98-b16b-b7a8f5b3712d", 00:32:24.862 "is_configured": false, 00:32:24.862 "data_offset": 0, 00:32:24.862 "data_size": 65536 00:32:24.862 }, 00:32:24.862 { 00:32:24.862 "name": null, 00:32:24.862 "uuid": "0a7d3c75-fd2f-44a6-a051-2d7ddb32f311", 00:32:24.862 "is_configured": false, 00:32:24.862 "data_offset": 0, 00:32:24.862 "data_size": 65536 00:32:24.862 }, 00:32:24.862 { 00:32:24.862 "name": "BaseBdev4", 00:32:24.862 "uuid": "561be712-8259-4c99-985a-73362fd571cb", 00:32:24.862 "is_configured": true, 00:32:24.862 "data_offset": 0, 00:32:24.862 "data_size": 65536 00:32:24.862 } 00:32:24.862 ] 00:32:24.862 }' 00:32:24.862 17:28:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:24.862 17:28:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.121 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.121 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.121 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:25.121 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.381 [2024-11-26 17:28:55.270577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:25.381 
17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:25.381 "name": "Existed_Raid", 00:32:25.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.381 "strip_size_kb": 64, 00:32:25.381 "state": "configuring", 00:32:25.381 "raid_level": "raid5f", 00:32:25.381 "superblock": false, 00:32:25.381 "num_base_bdevs": 4, 00:32:25.381 "num_base_bdevs_discovered": 3, 00:32:25.381 "num_base_bdevs_operational": 4, 00:32:25.381 "base_bdevs_list": [ 00:32:25.381 { 00:32:25.381 "name": "BaseBdev1", 00:32:25.381 "uuid": "a175361c-2ff0-4a48-96f1-5a37b14af811", 00:32:25.381 "is_configured": true, 00:32:25.381 "data_offset": 0, 00:32:25.381 "data_size": 65536 00:32:25.381 }, 00:32:25.381 { 00:32:25.381 "name": null, 00:32:25.381 "uuid": "321b2504-e08b-4b98-b16b-b7a8f5b3712d", 00:32:25.381 "is_configured": 
false, 00:32:25.381 "data_offset": 0, 00:32:25.381 "data_size": 65536 00:32:25.381 }, 00:32:25.381 { 00:32:25.381 "name": "BaseBdev3", 00:32:25.381 "uuid": "0a7d3c75-fd2f-44a6-a051-2d7ddb32f311", 00:32:25.381 "is_configured": true, 00:32:25.381 "data_offset": 0, 00:32:25.381 "data_size": 65536 00:32:25.381 }, 00:32:25.381 { 00:32:25.381 "name": "BaseBdev4", 00:32:25.381 "uuid": "561be712-8259-4c99-985a-73362fd571cb", 00:32:25.381 "is_configured": true, 00:32:25.381 "data_offset": 0, 00:32:25.381 "data_size": 65536 00:32:25.381 } 00:32:25.381 ] 00:32:25.381 }' 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:25.381 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.640 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.640 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.640 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.640 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:25.898 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.898 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:32:25.898 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:25.898 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.898 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.898 [2024-11-26 17:28:55.797884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:25.898 17:28:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.898 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:25.898 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:25.898 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:25.898 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:25.898 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:25.898 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:25.898 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:25.898 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:25.899 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:25.899 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:25.899 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:25.899 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.899 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.899 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.899 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.899 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:25.899 "name": "Existed_Raid", 00:32:25.899 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:32:25.899 "strip_size_kb": 64, 00:32:25.899 "state": "configuring", 00:32:25.899 "raid_level": "raid5f", 00:32:25.899 "superblock": false, 00:32:25.899 "num_base_bdevs": 4, 00:32:25.899 "num_base_bdevs_discovered": 2, 00:32:25.899 "num_base_bdevs_operational": 4, 00:32:25.899 "base_bdevs_list": [ 00:32:25.899 { 00:32:25.899 "name": null, 00:32:25.899 "uuid": "a175361c-2ff0-4a48-96f1-5a37b14af811", 00:32:25.899 "is_configured": false, 00:32:25.899 "data_offset": 0, 00:32:25.899 "data_size": 65536 00:32:25.899 }, 00:32:25.899 { 00:32:25.899 "name": null, 00:32:25.899 "uuid": "321b2504-e08b-4b98-b16b-b7a8f5b3712d", 00:32:25.899 "is_configured": false, 00:32:25.899 "data_offset": 0, 00:32:25.899 "data_size": 65536 00:32:25.899 }, 00:32:25.899 { 00:32:25.899 "name": "BaseBdev3", 00:32:25.899 "uuid": "0a7d3c75-fd2f-44a6-a051-2d7ddb32f311", 00:32:25.899 "is_configured": true, 00:32:25.899 "data_offset": 0, 00:32:25.899 "data_size": 65536 00:32:25.899 }, 00:32:25.899 { 00:32:25.899 "name": "BaseBdev4", 00:32:25.899 "uuid": "561be712-8259-4c99-985a-73362fd571cb", 00:32:25.899 "is_configured": true, 00:32:25.899 "data_offset": 0, 00:32:25.899 "data_size": 65536 00:32:25.899 } 00:32:25.899 ] 00:32:25.899 }' 00:32:25.899 17:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:25.899 17:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.468 [2024-11-26 17:28:56.394875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:26.468 "name": "Existed_Raid", 00:32:26.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.468 "strip_size_kb": 64, 00:32:26.468 "state": "configuring", 00:32:26.468 "raid_level": "raid5f", 00:32:26.468 "superblock": false, 00:32:26.468 "num_base_bdevs": 4, 00:32:26.468 "num_base_bdevs_discovered": 3, 00:32:26.468 "num_base_bdevs_operational": 4, 00:32:26.468 "base_bdevs_list": [ 00:32:26.468 { 00:32:26.468 "name": null, 00:32:26.468 "uuid": "a175361c-2ff0-4a48-96f1-5a37b14af811", 00:32:26.468 "is_configured": false, 00:32:26.468 "data_offset": 0, 00:32:26.468 "data_size": 65536 00:32:26.468 }, 00:32:26.468 { 00:32:26.468 "name": "BaseBdev2", 00:32:26.468 "uuid": "321b2504-e08b-4b98-b16b-b7a8f5b3712d", 00:32:26.468 "is_configured": true, 00:32:26.468 "data_offset": 0, 00:32:26.468 "data_size": 65536 00:32:26.468 }, 00:32:26.468 { 00:32:26.468 "name": "BaseBdev3", 00:32:26.468 "uuid": "0a7d3c75-fd2f-44a6-a051-2d7ddb32f311", 00:32:26.468 "is_configured": true, 00:32:26.468 "data_offset": 0, 00:32:26.468 "data_size": 65536 00:32:26.468 }, 00:32:26.468 { 00:32:26.468 "name": "BaseBdev4", 00:32:26.468 "uuid": "561be712-8259-4c99-985a-73362fd571cb", 00:32:26.468 "is_configured": true, 00:32:26.468 "data_offset": 0, 00:32:26.468 "data_size": 65536 00:32:26.468 } 00:32:26.468 ] 00:32:26.468 }' 00:32:26.468 17:28:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:26.468 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.735 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.735 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:26.735 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.735 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a175361c-2ff0-4a48-96f1-5a37b14af811 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.996 [2024-11-26 17:28:56.979256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:26.996 [2024-11-26 
17:28:56.979335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:32:26.996 [2024-11-26 17:28:56.979347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:26.996 [2024-11-26 17:28:56.979717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:32:26.996 [2024-11-26 17:28:56.987926] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:32:26.996 [2024-11-26 17:28:56.987962] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:32:26.996 [2024-11-26 17:28:56.988267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:26.996 NewBaseBdev 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.996 17:28:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.996 [ 00:32:26.996 { 00:32:26.996 "name": "NewBaseBdev", 00:32:26.996 "aliases": [ 00:32:26.996 "a175361c-2ff0-4a48-96f1-5a37b14af811" 00:32:26.996 ], 00:32:26.996 "product_name": "Malloc disk", 00:32:26.996 "block_size": 512, 00:32:26.996 "num_blocks": 65536, 00:32:26.996 "uuid": "a175361c-2ff0-4a48-96f1-5a37b14af811", 00:32:26.996 "assigned_rate_limits": { 00:32:26.996 "rw_ios_per_sec": 0, 00:32:26.996 "rw_mbytes_per_sec": 0, 00:32:26.996 "r_mbytes_per_sec": 0, 00:32:26.996 "w_mbytes_per_sec": 0 00:32:26.996 }, 00:32:26.996 "claimed": true, 00:32:26.996 "claim_type": "exclusive_write", 00:32:26.996 "zoned": false, 00:32:26.996 "supported_io_types": { 00:32:26.996 "read": true, 00:32:26.996 "write": true, 00:32:26.996 "unmap": true, 00:32:26.996 "flush": true, 00:32:26.996 "reset": true, 00:32:26.996 "nvme_admin": false, 00:32:26.996 "nvme_io": false, 00:32:26.996 "nvme_io_md": false, 00:32:26.996 "write_zeroes": true, 00:32:26.996 "zcopy": true, 00:32:26.996 "get_zone_info": false, 00:32:26.996 "zone_management": false, 00:32:26.996 "zone_append": false, 00:32:26.996 "compare": false, 00:32:26.996 "compare_and_write": false, 00:32:26.996 "abort": true, 00:32:26.996 "seek_hole": false, 00:32:26.996 "seek_data": false, 00:32:26.996 "copy": true, 00:32:26.996 "nvme_iov_md": false 00:32:26.996 }, 00:32:26.996 "memory_domains": [ 00:32:26.996 { 00:32:26.996 "dma_device_id": "system", 00:32:26.996 "dma_device_type": 1 00:32:26.996 }, 00:32:26.996 { 00:32:26.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:26.996 "dma_device_type": 2 00:32:26.996 } 
00:32:26.996 ], 00:32:26.996 "driver_specific": {} 00:32:26.996 } 00:32:26.996 ] 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:26.996 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:26.997 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.997 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:26.997 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.997 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.997 17:28:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.997 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:26.997 "name": "Existed_Raid", 00:32:26.997 "uuid": "095b3be0-3116-4c5c-8d6d-525be72ca0c6", 00:32:26.997 "strip_size_kb": 64, 00:32:26.997 "state": "online", 00:32:26.997 "raid_level": "raid5f", 00:32:26.997 "superblock": false, 00:32:26.997 "num_base_bdevs": 4, 00:32:26.997 "num_base_bdevs_discovered": 4, 00:32:26.997 "num_base_bdevs_operational": 4, 00:32:26.997 "base_bdevs_list": [ 00:32:26.997 { 00:32:26.997 "name": "NewBaseBdev", 00:32:26.997 "uuid": "a175361c-2ff0-4a48-96f1-5a37b14af811", 00:32:26.997 "is_configured": true, 00:32:26.997 "data_offset": 0, 00:32:26.997 "data_size": 65536 00:32:26.997 }, 00:32:26.997 { 00:32:26.997 "name": "BaseBdev2", 00:32:26.997 "uuid": "321b2504-e08b-4b98-b16b-b7a8f5b3712d", 00:32:26.997 "is_configured": true, 00:32:26.997 "data_offset": 0, 00:32:26.997 "data_size": 65536 00:32:26.997 }, 00:32:26.997 { 00:32:26.997 "name": "BaseBdev3", 00:32:26.997 "uuid": "0a7d3c75-fd2f-44a6-a051-2d7ddb32f311", 00:32:26.997 "is_configured": true, 00:32:26.997 "data_offset": 0, 00:32:26.997 "data_size": 65536 00:32:26.997 }, 00:32:26.997 { 00:32:26.997 "name": "BaseBdev4", 00:32:26.997 "uuid": "561be712-8259-4c99-985a-73362fd571cb", 00:32:26.997 "is_configured": true, 00:32:26.997 "data_offset": 0, 00:32:26.997 "data_size": 65536 00:32:26.997 } 00:32:26.997 ] 00:32:26.997 }' 00:32:26.997 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:26.997 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.564 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:32:27.564 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:27.564 17:28:57 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:27.564 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:27.564 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:27.564 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:27.564 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:27.564 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.564 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:27.564 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.564 [2024-11-26 17:28:57.513271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:27.564 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.564 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:27.564 "name": "Existed_Raid", 00:32:27.564 "aliases": [ 00:32:27.564 "095b3be0-3116-4c5c-8d6d-525be72ca0c6" 00:32:27.564 ], 00:32:27.564 "product_name": "Raid Volume", 00:32:27.564 "block_size": 512, 00:32:27.564 "num_blocks": 196608, 00:32:27.564 "uuid": "095b3be0-3116-4c5c-8d6d-525be72ca0c6", 00:32:27.564 "assigned_rate_limits": { 00:32:27.564 "rw_ios_per_sec": 0, 00:32:27.564 "rw_mbytes_per_sec": 0, 00:32:27.564 "r_mbytes_per_sec": 0, 00:32:27.565 "w_mbytes_per_sec": 0 00:32:27.565 }, 00:32:27.565 "claimed": false, 00:32:27.565 "zoned": false, 00:32:27.565 "supported_io_types": { 00:32:27.565 "read": true, 00:32:27.565 "write": true, 00:32:27.565 "unmap": false, 00:32:27.565 "flush": false, 00:32:27.565 "reset": true, 00:32:27.565 "nvme_admin": false, 00:32:27.565 "nvme_io": false, 00:32:27.565 "nvme_io_md": 
false, 00:32:27.565 "write_zeroes": true, 00:32:27.565 "zcopy": false, 00:32:27.565 "get_zone_info": false, 00:32:27.565 "zone_management": false, 00:32:27.565 "zone_append": false, 00:32:27.565 "compare": false, 00:32:27.565 "compare_and_write": false, 00:32:27.565 "abort": false, 00:32:27.565 "seek_hole": false, 00:32:27.565 "seek_data": false, 00:32:27.565 "copy": false, 00:32:27.565 "nvme_iov_md": false 00:32:27.565 }, 00:32:27.565 "driver_specific": { 00:32:27.565 "raid": { 00:32:27.565 "uuid": "095b3be0-3116-4c5c-8d6d-525be72ca0c6", 00:32:27.565 "strip_size_kb": 64, 00:32:27.565 "state": "online", 00:32:27.565 "raid_level": "raid5f", 00:32:27.565 "superblock": false, 00:32:27.565 "num_base_bdevs": 4, 00:32:27.565 "num_base_bdevs_discovered": 4, 00:32:27.565 "num_base_bdevs_operational": 4, 00:32:27.565 "base_bdevs_list": [ 00:32:27.565 { 00:32:27.565 "name": "NewBaseBdev", 00:32:27.565 "uuid": "a175361c-2ff0-4a48-96f1-5a37b14af811", 00:32:27.565 "is_configured": true, 00:32:27.565 "data_offset": 0, 00:32:27.565 "data_size": 65536 00:32:27.565 }, 00:32:27.565 { 00:32:27.565 "name": "BaseBdev2", 00:32:27.565 "uuid": "321b2504-e08b-4b98-b16b-b7a8f5b3712d", 00:32:27.565 "is_configured": true, 00:32:27.565 "data_offset": 0, 00:32:27.565 "data_size": 65536 00:32:27.565 }, 00:32:27.565 { 00:32:27.565 "name": "BaseBdev3", 00:32:27.565 "uuid": "0a7d3c75-fd2f-44a6-a051-2d7ddb32f311", 00:32:27.565 "is_configured": true, 00:32:27.565 "data_offset": 0, 00:32:27.565 "data_size": 65536 00:32:27.565 }, 00:32:27.565 { 00:32:27.565 "name": "BaseBdev4", 00:32:27.565 "uuid": "561be712-8259-4c99-985a-73362fd571cb", 00:32:27.565 "is_configured": true, 00:32:27.565 "data_offset": 0, 00:32:27.565 "data_size": 65536 00:32:27.565 } 00:32:27.565 ] 00:32:27.565 } 00:32:27.565 } 00:32:27.565 }' 00:32:27.565 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:27.565 17:28:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:32:27.565 BaseBdev2 00:32:27.565 BaseBdev3 00:32:27.565 BaseBdev4' 00:32:27.565 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:27.565 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:27.565 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:27.565 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:32:27.565 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.565 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.565 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:27.565 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.824 [2024-11-26 17:28:57.832589] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:27.824 [2024-11-26 17:28:57.832625] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:27.824 [2024-11-26 17:28:57.832729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:27.824 [2024-11-26 17:28:57.833087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:27.824 [2024-11-26 17:28:57.833104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82937 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82937 ']' 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82937 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:27.824 17:28:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82937 00:32:27.824 killing process with pid 82937 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82937' 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82937 00:32:27.824 [2024-11-26 17:28:57.880103] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:27.824 17:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82937 00:32:28.392 [2024-11-26 17:28:58.320880] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:29.784 17:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:32:29.784 00:32:29.784 real 0m11.918s 00:32:29.784 user 0m18.653s 00:32:29.784 sys 0m2.595s 00:32:29.784 ************************************ 00:32:29.784 END TEST raid5f_state_function_test 00:32:29.784 ************************************ 00:32:29.784 17:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.784 17:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.784 17:28:59 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:32:29.784 17:28:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:29.784 17:28:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:29.784 17:28:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:29.784 ************************************ 00:32:29.784 START TEST 
raid5f_state_function_test_sb 00:32:29.784 ************************************ 00:32:29.784 17:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:32:29.785 
17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83610 00:32:29.785 Process raid pid: 83610 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83610' 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83610 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # 
'[' -z 83610 ']' 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:29.785 17:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:29.785 [2024-11-26 17:28:59.810965] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:32:29.785 [2024-11-26 17:28:59.811151] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:30.043 [2024-11-26 17:29:00.011497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.301 [2024-11-26 17:29:00.178278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.561 [2024-11-26 17:29:00.431212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:30.561 [2024-11-26 17:29:00.431261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:30.561 17:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.561 17:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:32:30.561 17:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:30.561 17:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.561 17:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:30.821 [2024-11-26 17:29:00.676335] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:30.821 [2024-11-26 17:29:00.676419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:30.821 [2024-11-26 17:29:00.676433] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:30.821 [2024-11-26 17:29:00.676446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:30.821 [2024-11-26 17:29:00.676454] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:32:30.821 [2024-11-26 17:29:00.676467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:30.821 [2024-11-26 17:29:00.676476] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:30.821 [2024-11-26 17:29:00.676490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:30.821 "name": "Existed_Raid", 00:32:30.821 "uuid": "635a40f5-984a-446a-a2d9-5d601471705e", 00:32:30.821 "strip_size_kb": 64, 00:32:30.821 "state": "configuring", 00:32:30.821 "raid_level": "raid5f", 00:32:30.821 "superblock": true, 00:32:30.821 "num_base_bdevs": 4, 00:32:30.821 "num_base_bdevs_discovered": 0, 00:32:30.821 "num_base_bdevs_operational": 4, 00:32:30.821 "base_bdevs_list": [ 00:32:30.821 { 00:32:30.821 "name": "BaseBdev1", 00:32:30.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.821 "is_configured": false, 00:32:30.821 "data_offset": 0, 00:32:30.821 "data_size": 0 00:32:30.821 }, 00:32:30.821 { 00:32:30.821 "name": "BaseBdev2", 00:32:30.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.821 "is_configured": false, 00:32:30.821 "data_offset": 0, 00:32:30.821 "data_size": 0 00:32:30.821 }, 00:32:30.821 { 00:32:30.821 "name": "BaseBdev3", 00:32:30.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.821 "is_configured": false, 00:32:30.821 "data_offset": 0, 00:32:30.821 "data_size": 0 00:32:30.821 }, 00:32:30.821 { 00:32:30.821 "name": "BaseBdev4", 00:32:30.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.821 "is_configured": false, 00:32:30.821 "data_offset": 0, 00:32:30.821 "data_size": 0 00:32:30.821 } 00:32:30.821 ] 00:32:30.821 }' 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:30.821 17:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.080 [2024-11-26 17:29:01.123682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:31.080 [2024-11-26 17:29:01.123738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.080 [2024-11-26 17:29:01.135640] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:31.080 [2024-11-26 17:29:01.135690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:31.080 [2024-11-26 17:29:01.135702] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:31.080 [2024-11-26 17:29:01.135716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:31.080 [2024-11-26 17:29:01.135742] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:31.080 [2024-11-26 17:29:01.135755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:31.080 [2024-11-26 17:29:01.135764] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:31.080 [2024-11-26 17:29:01.135777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.080 [2024-11-26 17:29:01.184008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:31.080 BaseBdev1 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.080 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.339 [ 00:32:31.339 { 00:32:31.339 "name": "BaseBdev1", 00:32:31.339 "aliases": [ 00:32:31.339 "a1c005c7-d2f4-4f0f-96bf-49f6c8984c42" 00:32:31.339 ], 00:32:31.339 "product_name": "Malloc disk", 00:32:31.339 "block_size": 512, 00:32:31.339 "num_blocks": 65536, 00:32:31.339 "uuid": "a1c005c7-d2f4-4f0f-96bf-49f6c8984c42", 00:32:31.339 "assigned_rate_limits": { 00:32:31.339 "rw_ios_per_sec": 0, 00:32:31.339 "rw_mbytes_per_sec": 0, 00:32:31.339 "r_mbytes_per_sec": 0, 00:32:31.339 "w_mbytes_per_sec": 0 00:32:31.339 }, 00:32:31.339 "claimed": true, 00:32:31.339 "claim_type": "exclusive_write", 00:32:31.339 "zoned": false, 00:32:31.339 "supported_io_types": { 00:32:31.339 "read": true, 00:32:31.339 "write": true, 00:32:31.339 "unmap": true, 00:32:31.339 "flush": true, 00:32:31.339 "reset": true, 00:32:31.339 "nvme_admin": false, 00:32:31.339 "nvme_io": false, 00:32:31.339 "nvme_io_md": false, 00:32:31.339 "write_zeroes": true, 00:32:31.339 "zcopy": true, 00:32:31.339 "get_zone_info": false, 00:32:31.339 "zone_management": false, 00:32:31.339 "zone_append": false, 00:32:31.339 "compare": false, 00:32:31.339 "compare_and_write": false, 00:32:31.339 "abort": true, 00:32:31.339 "seek_hole": false, 00:32:31.339 "seek_data": false, 00:32:31.339 "copy": true, 00:32:31.339 "nvme_iov_md": false 00:32:31.339 }, 00:32:31.339 "memory_domains": [ 00:32:31.339 { 00:32:31.339 "dma_device_id": "system", 00:32:31.339 "dma_device_type": 1 00:32:31.339 }, 00:32:31.339 { 00:32:31.339 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:32:31.339 "dma_device_type": 2 00:32:31.339 } 00:32:31.339 ], 00:32:31.339 "driver_specific": {} 00:32:31.339 } 00:32:31.339 ] 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:31.339 "name": "Existed_Raid", 00:32:31.339 "uuid": "9ff0f6bf-29c6-4f2d-ae01-b6b3b0366ce9", 00:32:31.339 "strip_size_kb": 64, 00:32:31.339 "state": "configuring", 00:32:31.339 "raid_level": "raid5f", 00:32:31.339 "superblock": true, 00:32:31.339 "num_base_bdevs": 4, 00:32:31.339 "num_base_bdevs_discovered": 1, 00:32:31.339 "num_base_bdevs_operational": 4, 00:32:31.339 "base_bdevs_list": [ 00:32:31.339 { 00:32:31.339 "name": "BaseBdev1", 00:32:31.339 "uuid": "a1c005c7-d2f4-4f0f-96bf-49f6c8984c42", 00:32:31.339 "is_configured": true, 00:32:31.339 "data_offset": 2048, 00:32:31.339 "data_size": 63488 00:32:31.339 }, 00:32:31.339 { 00:32:31.339 "name": "BaseBdev2", 00:32:31.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.339 "is_configured": false, 00:32:31.339 "data_offset": 0, 00:32:31.339 "data_size": 0 00:32:31.339 }, 00:32:31.339 { 00:32:31.339 "name": "BaseBdev3", 00:32:31.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.339 "is_configured": false, 00:32:31.339 "data_offset": 0, 00:32:31.339 "data_size": 0 00:32:31.339 }, 00:32:31.339 { 00:32:31.339 "name": "BaseBdev4", 00:32:31.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.339 "is_configured": false, 00:32:31.339 "data_offset": 0, 00:32:31.339 "data_size": 0 00:32:31.339 } 00:32:31.339 ] 00:32:31.339 }' 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:31.339 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:31.598 17:29:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.598 [2024-11-26 17:29:01.675426] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:31.598 [2024-11-26 17:29:01.675499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.598 [2024-11-26 17:29:01.683484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:31.598 [2024-11-26 17:29:01.685813] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:31.598 [2024-11-26 17:29:01.685864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:31.598 [2024-11-26 17:29:01.685893] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:31.598 [2024-11-26 17:29:01.685909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:31.598 [2024-11-26 17:29:01.685918] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:31.598 [2024-11-26 17:29:01.685930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.598 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:31.856 17:29:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.856 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:31.856 "name": "Existed_Raid", 00:32:31.856 "uuid": "36ab5f36-0d04-4707-956c-37e020afd7ec", 00:32:31.856 "strip_size_kb": 64, 00:32:31.856 "state": "configuring", 00:32:31.856 "raid_level": "raid5f", 00:32:31.856 "superblock": true, 00:32:31.856 "num_base_bdevs": 4, 00:32:31.856 "num_base_bdevs_discovered": 1, 00:32:31.856 "num_base_bdevs_operational": 4, 00:32:31.856 "base_bdevs_list": [ 00:32:31.856 { 00:32:31.856 "name": "BaseBdev1", 00:32:31.856 "uuid": "a1c005c7-d2f4-4f0f-96bf-49f6c8984c42", 00:32:31.856 "is_configured": true, 00:32:31.856 "data_offset": 2048, 00:32:31.856 "data_size": 63488 00:32:31.856 }, 00:32:31.856 { 00:32:31.856 "name": "BaseBdev2", 00:32:31.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.856 "is_configured": false, 00:32:31.856 "data_offset": 0, 00:32:31.856 "data_size": 0 00:32:31.856 }, 00:32:31.856 { 00:32:31.856 "name": "BaseBdev3", 00:32:31.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.856 "is_configured": false, 00:32:31.856 "data_offset": 0, 00:32:31.856 "data_size": 0 00:32:31.856 }, 00:32:31.856 { 00:32:31.856 "name": "BaseBdev4", 00:32:31.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.856 "is_configured": false, 00:32:31.856 "data_offset": 0, 00:32:31.856 "data_size": 0 00:32:31.856 } 00:32:31.856 ] 00:32:31.856 }' 00:32:31.856 17:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:31.856 17:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.115 [2024-11-26 17:29:02.087412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:32.115 BaseBdev2 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.115 [ 00:32:32.115 { 00:32:32.115 "name": "BaseBdev2", 00:32:32.115 "aliases": [ 00:32:32.115 
"a3273838-22a2-4b09-aa34-b9b14d0497ee" 00:32:32.115 ], 00:32:32.115 "product_name": "Malloc disk", 00:32:32.115 "block_size": 512, 00:32:32.115 "num_blocks": 65536, 00:32:32.115 "uuid": "a3273838-22a2-4b09-aa34-b9b14d0497ee", 00:32:32.115 "assigned_rate_limits": { 00:32:32.115 "rw_ios_per_sec": 0, 00:32:32.115 "rw_mbytes_per_sec": 0, 00:32:32.115 "r_mbytes_per_sec": 0, 00:32:32.115 "w_mbytes_per_sec": 0 00:32:32.115 }, 00:32:32.115 "claimed": true, 00:32:32.115 "claim_type": "exclusive_write", 00:32:32.115 "zoned": false, 00:32:32.115 "supported_io_types": { 00:32:32.115 "read": true, 00:32:32.115 "write": true, 00:32:32.115 "unmap": true, 00:32:32.115 "flush": true, 00:32:32.115 "reset": true, 00:32:32.115 "nvme_admin": false, 00:32:32.115 "nvme_io": false, 00:32:32.115 "nvme_io_md": false, 00:32:32.115 "write_zeroes": true, 00:32:32.115 "zcopy": true, 00:32:32.115 "get_zone_info": false, 00:32:32.115 "zone_management": false, 00:32:32.115 "zone_append": false, 00:32:32.115 "compare": false, 00:32:32.115 "compare_and_write": false, 00:32:32.115 "abort": true, 00:32:32.115 "seek_hole": false, 00:32:32.115 "seek_data": false, 00:32:32.115 "copy": true, 00:32:32.115 "nvme_iov_md": false 00:32:32.115 }, 00:32:32.115 "memory_domains": [ 00:32:32.115 { 00:32:32.115 "dma_device_id": "system", 00:32:32.115 "dma_device_type": 1 00:32:32.115 }, 00:32:32.115 { 00:32:32.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:32.115 "dma_device_type": 2 00:32:32.115 } 00:32:32.115 ], 00:32:32.115 "driver_specific": {} 00:32:32.115 } 00:32:32.115 ] 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:32.115 "name": "Existed_Raid", 00:32:32.115 "uuid": 
"36ab5f36-0d04-4707-956c-37e020afd7ec", 00:32:32.115 "strip_size_kb": 64, 00:32:32.115 "state": "configuring", 00:32:32.115 "raid_level": "raid5f", 00:32:32.115 "superblock": true, 00:32:32.115 "num_base_bdevs": 4, 00:32:32.115 "num_base_bdevs_discovered": 2, 00:32:32.115 "num_base_bdevs_operational": 4, 00:32:32.115 "base_bdevs_list": [ 00:32:32.115 { 00:32:32.115 "name": "BaseBdev1", 00:32:32.115 "uuid": "a1c005c7-d2f4-4f0f-96bf-49f6c8984c42", 00:32:32.115 "is_configured": true, 00:32:32.115 "data_offset": 2048, 00:32:32.115 "data_size": 63488 00:32:32.115 }, 00:32:32.115 { 00:32:32.115 "name": "BaseBdev2", 00:32:32.115 "uuid": "a3273838-22a2-4b09-aa34-b9b14d0497ee", 00:32:32.115 "is_configured": true, 00:32:32.115 "data_offset": 2048, 00:32:32.115 "data_size": 63488 00:32:32.115 }, 00:32:32.115 { 00:32:32.115 "name": "BaseBdev3", 00:32:32.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.115 "is_configured": false, 00:32:32.115 "data_offset": 0, 00:32:32.115 "data_size": 0 00:32:32.115 }, 00:32:32.115 { 00:32:32.115 "name": "BaseBdev4", 00:32:32.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.115 "is_configured": false, 00:32:32.115 "data_offset": 0, 00:32:32.115 "data_size": 0 00:32:32.115 } 00:32:32.115 ] 00:32:32.115 }' 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:32.115 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.682 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:32.682 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.682 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.682 [2024-11-26 17:29:02.602947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:32.682 BaseBdev3 
00:32:32.682 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.682 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:32:32.682 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:32:32.682 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.683 [ 00:32:32.683 { 00:32:32.683 "name": "BaseBdev3", 00:32:32.683 "aliases": [ 00:32:32.683 "badb38d6-8bdd-43fa-8dac-d8381a50f4b2" 00:32:32.683 ], 00:32:32.683 "product_name": "Malloc disk", 00:32:32.683 "block_size": 512, 00:32:32.683 "num_blocks": 65536, 00:32:32.683 "uuid": "badb38d6-8bdd-43fa-8dac-d8381a50f4b2", 00:32:32.683 
"assigned_rate_limits": { 00:32:32.683 "rw_ios_per_sec": 0, 00:32:32.683 "rw_mbytes_per_sec": 0, 00:32:32.683 "r_mbytes_per_sec": 0, 00:32:32.683 "w_mbytes_per_sec": 0 00:32:32.683 }, 00:32:32.683 "claimed": true, 00:32:32.683 "claim_type": "exclusive_write", 00:32:32.683 "zoned": false, 00:32:32.683 "supported_io_types": { 00:32:32.683 "read": true, 00:32:32.683 "write": true, 00:32:32.683 "unmap": true, 00:32:32.683 "flush": true, 00:32:32.683 "reset": true, 00:32:32.683 "nvme_admin": false, 00:32:32.683 "nvme_io": false, 00:32:32.683 "nvme_io_md": false, 00:32:32.683 "write_zeroes": true, 00:32:32.683 "zcopy": true, 00:32:32.683 "get_zone_info": false, 00:32:32.683 "zone_management": false, 00:32:32.683 "zone_append": false, 00:32:32.683 "compare": false, 00:32:32.683 "compare_and_write": false, 00:32:32.683 "abort": true, 00:32:32.683 "seek_hole": false, 00:32:32.683 "seek_data": false, 00:32:32.683 "copy": true, 00:32:32.683 "nvme_iov_md": false 00:32:32.683 }, 00:32:32.683 "memory_domains": [ 00:32:32.683 { 00:32:32.683 "dma_device_id": "system", 00:32:32.683 "dma_device_type": 1 00:32:32.683 }, 00:32:32.683 { 00:32:32.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:32.683 "dma_device_type": 2 00:32:32.683 } 00:32:32.683 ], 00:32:32.683 "driver_specific": {} 00:32:32.683 } 00:32:32.683 ] 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:32.683 "name": "Existed_Raid", 00:32:32.683 "uuid": "36ab5f36-0d04-4707-956c-37e020afd7ec", 00:32:32.683 "strip_size_kb": 64, 00:32:32.683 "state": "configuring", 00:32:32.683 "raid_level": "raid5f", 00:32:32.683 "superblock": true, 00:32:32.683 "num_base_bdevs": 4, 00:32:32.683 "num_base_bdevs_discovered": 3, 
00:32:32.683 "num_base_bdevs_operational": 4, 00:32:32.683 "base_bdevs_list": [ 00:32:32.683 { 00:32:32.683 "name": "BaseBdev1", 00:32:32.683 "uuid": "a1c005c7-d2f4-4f0f-96bf-49f6c8984c42", 00:32:32.683 "is_configured": true, 00:32:32.683 "data_offset": 2048, 00:32:32.683 "data_size": 63488 00:32:32.683 }, 00:32:32.683 { 00:32:32.683 "name": "BaseBdev2", 00:32:32.683 "uuid": "a3273838-22a2-4b09-aa34-b9b14d0497ee", 00:32:32.683 "is_configured": true, 00:32:32.683 "data_offset": 2048, 00:32:32.683 "data_size": 63488 00:32:32.683 }, 00:32:32.683 { 00:32:32.683 "name": "BaseBdev3", 00:32:32.683 "uuid": "badb38d6-8bdd-43fa-8dac-d8381a50f4b2", 00:32:32.683 "is_configured": true, 00:32:32.683 "data_offset": 2048, 00:32:32.683 "data_size": 63488 00:32:32.683 }, 00:32:32.683 { 00:32:32.683 "name": "BaseBdev4", 00:32:32.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.683 "is_configured": false, 00:32:32.683 "data_offset": 0, 00:32:32.683 "data_size": 0 00:32:32.683 } 00:32:32.683 ] 00:32:32.683 }' 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:32.683 17:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.250 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:33.250 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.250 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.250 [2024-11-26 17:29:03.102943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:33.250 [2024-11-26 17:29:03.103279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:33.250 [2024-11-26 17:29:03.103297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:33.250 [2024-11-26 
17:29:03.103635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:33.250 BaseBdev4 00:32:33.250 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.250 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:32:33.250 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:32:33.250 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:33.250 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:33.250 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:33.250 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:33.250 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:33.250 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.251 [2024-11-26 17:29:03.111707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:33.251 [2024-11-26 17:29:03.111738] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:33.251 [2024-11-26 17:29:03.112017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:33.251 17:29:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.251 [ 00:32:33.251 { 00:32:33.251 "name": "BaseBdev4", 00:32:33.251 "aliases": [ 00:32:33.251 "e55156a8-ac91-4383-8b7e-dd486c11c288" 00:32:33.251 ], 00:32:33.251 "product_name": "Malloc disk", 00:32:33.251 "block_size": 512, 00:32:33.251 "num_blocks": 65536, 00:32:33.251 "uuid": "e55156a8-ac91-4383-8b7e-dd486c11c288", 00:32:33.251 "assigned_rate_limits": { 00:32:33.251 "rw_ios_per_sec": 0, 00:32:33.251 "rw_mbytes_per_sec": 0, 00:32:33.251 "r_mbytes_per_sec": 0, 00:32:33.251 "w_mbytes_per_sec": 0 00:32:33.251 }, 00:32:33.251 "claimed": true, 00:32:33.251 "claim_type": "exclusive_write", 00:32:33.251 "zoned": false, 00:32:33.251 "supported_io_types": { 00:32:33.251 "read": true, 00:32:33.251 "write": true, 00:32:33.251 "unmap": true, 00:32:33.251 "flush": true, 00:32:33.251 "reset": true, 00:32:33.251 "nvme_admin": false, 00:32:33.251 "nvme_io": false, 00:32:33.251 "nvme_io_md": false, 00:32:33.251 "write_zeroes": true, 00:32:33.251 "zcopy": true, 00:32:33.251 "get_zone_info": false, 00:32:33.251 "zone_management": false, 00:32:33.251 "zone_append": false, 00:32:33.251 "compare": false, 00:32:33.251 "compare_and_write": false, 00:32:33.251 "abort": true, 00:32:33.251 "seek_hole": false, 00:32:33.251 "seek_data": false, 00:32:33.251 "copy": true, 00:32:33.251 "nvme_iov_md": false 00:32:33.251 }, 00:32:33.251 "memory_domains": [ 00:32:33.251 { 00:32:33.251 "dma_device_id": "system", 00:32:33.251 "dma_device_type": 1 00:32:33.251 }, 00:32:33.251 { 00:32:33.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:33.251 "dma_device_type": 2 00:32:33.251 } 00:32:33.251 ], 00:32:33.251 "driver_specific": {} 00:32:33.251 } 00:32:33.251 ] 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.251 17:29:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:33.251 "name": "Existed_Raid", 00:32:33.251 "uuid": "36ab5f36-0d04-4707-956c-37e020afd7ec", 00:32:33.251 "strip_size_kb": 64, 00:32:33.251 "state": "online", 00:32:33.251 "raid_level": "raid5f", 00:32:33.251 "superblock": true, 00:32:33.251 "num_base_bdevs": 4, 00:32:33.251 "num_base_bdevs_discovered": 4, 00:32:33.251 "num_base_bdevs_operational": 4, 00:32:33.251 "base_bdevs_list": [ 00:32:33.251 { 00:32:33.251 "name": "BaseBdev1", 00:32:33.251 "uuid": "a1c005c7-d2f4-4f0f-96bf-49f6c8984c42", 00:32:33.251 "is_configured": true, 00:32:33.251 "data_offset": 2048, 00:32:33.251 "data_size": 63488 00:32:33.251 }, 00:32:33.251 { 00:32:33.251 "name": "BaseBdev2", 00:32:33.251 "uuid": "a3273838-22a2-4b09-aa34-b9b14d0497ee", 00:32:33.251 "is_configured": true, 00:32:33.251 "data_offset": 2048, 00:32:33.251 "data_size": 63488 00:32:33.251 }, 00:32:33.251 { 00:32:33.251 "name": "BaseBdev3", 00:32:33.251 "uuid": "badb38d6-8bdd-43fa-8dac-d8381a50f4b2", 00:32:33.251 "is_configured": true, 00:32:33.251 "data_offset": 2048, 00:32:33.251 "data_size": 63488 00:32:33.251 }, 00:32:33.251 { 00:32:33.251 "name": "BaseBdev4", 00:32:33.251 "uuid": "e55156a8-ac91-4383-8b7e-dd486c11c288", 00:32:33.251 "is_configured": true, 00:32:33.251 "data_offset": 2048, 00:32:33.251 "data_size": 63488 00:32:33.251 } 00:32:33.251 ] 00:32:33.251 }' 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:33.251 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.510 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:33.510 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:32:33.510 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:33.510 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:33.510 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:32:33.510 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:33.510 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:33.510 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:33.510 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.510 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.510 [2024-11-26 17:29:03.620189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:33.769 "name": "Existed_Raid", 00:32:33.769 "aliases": [ 00:32:33.769 "36ab5f36-0d04-4707-956c-37e020afd7ec" 00:32:33.769 ], 00:32:33.769 "product_name": "Raid Volume", 00:32:33.769 "block_size": 512, 00:32:33.769 "num_blocks": 190464, 00:32:33.769 "uuid": "36ab5f36-0d04-4707-956c-37e020afd7ec", 00:32:33.769 "assigned_rate_limits": { 00:32:33.769 "rw_ios_per_sec": 0, 00:32:33.769 "rw_mbytes_per_sec": 0, 00:32:33.769 "r_mbytes_per_sec": 0, 00:32:33.769 "w_mbytes_per_sec": 0 00:32:33.769 }, 00:32:33.769 "claimed": false, 00:32:33.769 "zoned": false, 00:32:33.769 "supported_io_types": { 00:32:33.769 "read": true, 00:32:33.769 "write": true, 00:32:33.769 "unmap": false, 00:32:33.769 "flush": false, 
00:32:33.769 "reset": true, 00:32:33.769 "nvme_admin": false, 00:32:33.769 "nvme_io": false, 00:32:33.769 "nvme_io_md": false, 00:32:33.769 "write_zeroes": true, 00:32:33.769 "zcopy": false, 00:32:33.769 "get_zone_info": false, 00:32:33.769 "zone_management": false, 00:32:33.769 "zone_append": false, 00:32:33.769 "compare": false, 00:32:33.769 "compare_and_write": false, 00:32:33.769 "abort": false, 00:32:33.769 "seek_hole": false, 00:32:33.769 "seek_data": false, 00:32:33.769 "copy": false, 00:32:33.769 "nvme_iov_md": false 00:32:33.769 }, 00:32:33.769 "driver_specific": { 00:32:33.769 "raid": { 00:32:33.769 "uuid": "36ab5f36-0d04-4707-956c-37e020afd7ec", 00:32:33.769 "strip_size_kb": 64, 00:32:33.769 "state": "online", 00:32:33.769 "raid_level": "raid5f", 00:32:33.769 "superblock": true, 00:32:33.769 "num_base_bdevs": 4, 00:32:33.769 "num_base_bdevs_discovered": 4, 00:32:33.769 "num_base_bdevs_operational": 4, 00:32:33.769 "base_bdevs_list": [ 00:32:33.769 { 00:32:33.769 "name": "BaseBdev1", 00:32:33.769 "uuid": "a1c005c7-d2f4-4f0f-96bf-49f6c8984c42", 00:32:33.769 "is_configured": true, 00:32:33.769 "data_offset": 2048, 00:32:33.769 "data_size": 63488 00:32:33.769 }, 00:32:33.769 { 00:32:33.769 "name": "BaseBdev2", 00:32:33.769 "uuid": "a3273838-22a2-4b09-aa34-b9b14d0497ee", 00:32:33.769 "is_configured": true, 00:32:33.769 "data_offset": 2048, 00:32:33.769 "data_size": 63488 00:32:33.769 }, 00:32:33.769 { 00:32:33.769 "name": "BaseBdev3", 00:32:33.769 "uuid": "badb38d6-8bdd-43fa-8dac-d8381a50f4b2", 00:32:33.769 "is_configured": true, 00:32:33.769 "data_offset": 2048, 00:32:33.769 "data_size": 63488 00:32:33.769 }, 00:32:33.769 { 00:32:33.769 "name": "BaseBdev4", 00:32:33.769 "uuid": "e55156a8-ac91-4383-8b7e-dd486c11c288", 00:32:33.769 "is_configured": true, 00:32:33.769 "data_offset": 2048, 00:32:33.769 "data_size": 63488 00:32:33.769 } 00:32:33.769 ] 00:32:33.769 } 00:32:33.769 } 00:32:33.769 }' 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:33.769 BaseBdev2 00:32:33.769 BaseBdev3 00:32:33.769 BaseBdev4' 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.769 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.028 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:34.028 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:34.028 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:34.028 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:34.028 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.028 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:32:34.028 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:34.028 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.028 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:34.028 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:34.028 17:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:34.028 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.028 17:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.028 [2024-11-26 17:29:03.939624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.028 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:34.028 "name": "Existed_Raid", 00:32:34.028 "uuid": "36ab5f36-0d04-4707-956c-37e020afd7ec", 00:32:34.028 "strip_size_kb": 64, 00:32:34.028 "state": "online", 00:32:34.028 "raid_level": "raid5f", 00:32:34.028 "superblock": true, 00:32:34.028 "num_base_bdevs": 4, 00:32:34.028 "num_base_bdevs_discovered": 3, 00:32:34.028 "num_base_bdevs_operational": 3, 00:32:34.028 "base_bdevs_list": [ 00:32:34.028 { 00:32:34.028 "name": null, 00:32:34.029 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:32:34.029 "is_configured": false, 00:32:34.029 "data_offset": 0, 00:32:34.029 "data_size": 63488 00:32:34.029 }, 00:32:34.029 { 00:32:34.029 "name": "BaseBdev2", 00:32:34.029 "uuid": "a3273838-22a2-4b09-aa34-b9b14d0497ee", 00:32:34.029 "is_configured": true, 00:32:34.029 "data_offset": 2048, 00:32:34.029 "data_size": 63488 00:32:34.029 }, 00:32:34.029 { 00:32:34.029 "name": "BaseBdev3", 00:32:34.029 "uuid": "badb38d6-8bdd-43fa-8dac-d8381a50f4b2", 00:32:34.029 "is_configured": true, 00:32:34.029 "data_offset": 2048, 00:32:34.029 "data_size": 63488 00:32:34.029 }, 00:32:34.029 { 00:32:34.029 "name": "BaseBdev4", 00:32:34.029 "uuid": "e55156a8-ac91-4383-8b7e-dd486c11c288", 00:32:34.029 "is_configured": true, 00:32:34.029 "data_offset": 2048, 00:32:34.029 "data_size": 63488 00:32:34.029 } 00:32:34.029 ] 00:32:34.029 }' 00:32:34.029 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:34.029 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.597 [2024-11-26 17:29:04.555481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:34.597 [2024-11-26 17:29:04.555697] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:34.597 [2024-11-26 17:29:04.654663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.597 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:34.857 
17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.857 [2024-11-26 17:29:04.718637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.857 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.857 [2024-11-26 17:29:04.888626] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:32:34.857 [2024-11-26 17:29:04.888692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:35.117 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.117 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:35.117 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:35.117 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:35.117 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.117 17:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.117 17:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:32:35.117 BaseBdev2 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:35.117 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:35.118 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:35.118 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.118 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.118 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.118 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:35.118 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.118 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.118 [ 00:32:35.118 { 00:32:35.118 "name": "BaseBdev2", 00:32:35.118 "aliases": [ 00:32:35.118 "89875c9e-e668-4ce4-b031-f525d080f7ad" 00:32:35.118 ], 00:32:35.118 "product_name": "Malloc disk", 00:32:35.118 "block_size": 512, 00:32:35.118 "num_blocks": 65536, 00:32:35.118 "uuid": 
"89875c9e-e668-4ce4-b031-f525d080f7ad", 00:32:35.118 "assigned_rate_limits": { 00:32:35.118 "rw_ios_per_sec": 0, 00:32:35.118 "rw_mbytes_per_sec": 0, 00:32:35.118 "r_mbytes_per_sec": 0, 00:32:35.118 "w_mbytes_per_sec": 0 00:32:35.118 }, 00:32:35.118 "claimed": false, 00:32:35.118 "zoned": false, 00:32:35.118 "supported_io_types": { 00:32:35.118 "read": true, 00:32:35.118 "write": true, 00:32:35.118 "unmap": true, 00:32:35.118 "flush": true, 00:32:35.118 "reset": true, 00:32:35.118 "nvme_admin": false, 00:32:35.118 "nvme_io": false, 00:32:35.118 "nvme_io_md": false, 00:32:35.118 "write_zeroes": true, 00:32:35.118 "zcopy": true, 00:32:35.118 "get_zone_info": false, 00:32:35.118 "zone_management": false, 00:32:35.118 "zone_append": false, 00:32:35.118 "compare": false, 00:32:35.118 "compare_and_write": false, 00:32:35.118 "abort": true, 00:32:35.118 "seek_hole": false, 00:32:35.118 "seek_data": false, 00:32:35.118 "copy": true, 00:32:35.118 "nvme_iov_md": false 00:32:35.118 }, 00:32:35.118 "memory_domains": [ 00:32:35.118 { 00:32:35.118 "dma_device_id": "system", 00:32:35.118 "dma_device_type": 1 00:32:35.118 }, 00:32:35.118 { 00:32:35.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:35.118 "dma_device_type": 2 00:32:35.118 } 00:32:35.118 ], 00:32:35.118 "driver_specific": {} 00:32:35.118 } 00:32:35.118 ] 00:32:35.118 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.118 17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:35.118 17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:35.118 17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:35.118 17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:35.118 17:29:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:32:35.118  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:35.118  BaseBdev3
00:32:35.118  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:35.118  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:32:35.118  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:32:35.118  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:32:35.118  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:32:35.118  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:32:35.118  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:32:35.118  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:32:35.118  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:35.118  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:35.118  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:35.118  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:32:35.118  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:35.118  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:35.118  [
00:32:35.118  {
00:32:35.118  "name": "BaseBdev3",
00:32:35.118  "aliases": [
00:32:35.118  "490e2312-1444-4457-b482-9030cdeb8a6b"
00:32:35.118  ],
00:32:35.118  "product_name": "Malloc disk",
00:32:35.118  "block_size": 512,
00:32:35.118  "num_blocks": 65536,
00:32:35.118  "uuid": "490e2312-1444-4457-b482-9030cdeb8a6b",
00:32:35.118  "assigned_rate_limits": {
00:32:35.118  "rw_ios_per_sec": 0,
00:32:35.118  "rw_mbytes_per_sec": 0,
00:32:35.118  "r_mbytes_per_sec": 0,
00:32:35.118  "w_mbytes_per_sec": 0
00:32:35.118  },
00:32:35.118  "claimed": false,
00:32:35.118  "zoned": false,
00:32:35.118  "supported_io_types": {
00:32:35.118  "read": true,
00:32:35.118  "write": true,
00:32:35.118  "unmap": true,
00:32:35.118  "flush": true,
00:32:35.118  "reset": true,
00:32:35.118  "nvme_admin": false,
00:32:35.118  "nvme_io": false,
00:32:35.118  "nvme_io_md": false,
00:32:35.378  "write_zeroes": true,
00:32:35.378  "zcopy": true,
00:32:35.378  "get_zone_info": false,
00:32:35.378  "zone_management": false,
00:32:35.378  "zone_append": false,
00:32:35.378  "compare": false,
00:32:35.378  "compare_and_write": false,
00:32:35.378  "abort": true,
00:32:35.378  "seek_hole": false,
00:32:35.378  "seek_data": false,
00:32:35.378  "copy": true,
00:32:35.378  "nvme_iov_md": false
00:32:35.378  },
00:32:35.378  "memory_domains": [
00:32:35.378  {
00:32:35.378  "dma_device_id": "system",
00:32:35.378  "dma_device_type": 1
00:32:35.378  },
00:32:35.378  {
00:32:35.378  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:32:35.378  "dma_device_type": 2
00:32:35.378  }
00:32:35.378  ],
00:32:35.378  "driver_specific": {}
00:32:35.378  }
00:32:35.378  ]
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:35.378  BaseBdev4
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:35.378  [
00:32:35.378  {
00:32:35.378  "name": "BaseBdev4",
00:32:35.378  "aliases": [
00:32:35.378  "b5808b9f-95c9-47cd-9984-febda31a6d19"
00:32:35.378  ],
00:32:35.378  "product_name": "Malloc disk",
00:32:35.378  "block_size": 512,
00:32:35.378  "num_blocks": 65536,
00:32:35.378  "uuid": "b5808b9f-95c9-47cd-9984-febda31a6d19",
00:32:35.378  "assigned_rate_limits": {
00:32:35.378  "rw_ios_per_sec": 0,
00:32:35.378  "rw_mbytes_per_sec": 0,
00:32:35.378  "r_mbytes_per_sec": 0,
00:32:35.378  "w_mbytes_per_sec": 0
00:32:35.378  },
00:32:35.378  "claimed": false,
00:32:35.378  "zoned": false,
00:32:35.378  "supported_io_types": {
00:32:35.378  "read": true,
00:32:35.378  "write": true,
00:32:35.378  "unmap": true,
00:32:35.378  "flush": true,
00:32:35.378  "reset": true,
00:32:35.378  "nvme_admin": false,
00:32:35.378  "nvme_io": false,
00:32:35.378  "nvme_io_md": false,
00:32:35.378  "write_zeroes": true,
00:32:35.378  "zcopy": true,
00:32:35.378  "get_zone_info": false,
00:32:35.378  "zone_management": false,
00:32:35.378  "zone_append": false,
00:32:35.378  "compare": false,
00:32:35.378  "compare_and_write": false,
00:32:35.378  "abort": true,
00:32:35.378  "seek_hole": false,
00:32:35.378  "seek_data": false,
00:32:35.378  "copy": true,
00:32:35.378  "nvme_iov_md": false
00:32:35.378  },
00:32:35.378  "memory_domains": [
00:32:35.378  {
00:32:35.378  "dma_device_id": "system",
00:32:35.378  "dma_device_type": 1
00:32:35.378  },
00:32:35.378  {
00:32:35.378  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:32:35.378  "dma_device_type": 2
00:32:35.378  }
00:32:35.378  ],
00:32:35.378  "driver_specific": {}
00:32:35.378  }
00:32:35.378  ]
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:35.378  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:35.378  [2024-11-26 17:29:05.342596] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:32:35.379  [2024-11-26 17:29:05.342784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:32:35.379  [2024-11-26 17:29:05.342842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:32:35.379  [2024-11-26 17:29:05.345281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:32:35.379  [2024-11-26 17:29:05.345344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:35.379  "name": "Existed_Raid",
00:32:35.379  "uuid": "9ca8e989-7634-4358-a201-b51ebd0f294e",
00:32:35.379  "strip_size_kb": 64,
00:32:35.379  "state": "configuring",
00:32:35.379  "raid_level": "raid5f",
00:32:35.379  "superblock": true,
00:32:35.379  "num_base_bdevs": 4,
00:32:35.379  "num_base_bdevs_discovered": 3,
00:32:35.379  "num_base_bdevs_operational": 4,
00:32:35.379  "base_bdevs_list": [
00:32:35.379  {
00:32:35.379  "name": "BaseBdev1",
00:32:35.379  "uuid": "00000000-0000-0000-0000-000000000000",
00:32:35.379  "is_configured": false,
00:32:35.379  "data_offset": 0,
00:32:35.379  "data_size": 0
00:32:35.379  },
00:32:35.379  {
00:32:35.379  "name": "BaseBdev2",
00:32:35.379  "uuid": "89875c9e-e668-4ce4-b031-f525d080f7ad",
00:32:35.379  "is_configured": true,
00:32:35.379  "data_offset": 2048,
00:32:35.379  "data_size": 63488
00:32:35.379  },
00:32:35.379  {
00:32:35.379  "name": "BaseBdev3",
00:32:35.379  "uuid": "490e2312-1444-4457-b482-9030cdeb8a6b",
00:32:35.379  "is_configured": true,
00:32:35.379  "data_offset": 2048,
00:32:35.379  "data_size": 63488
00:32:35.379  },
00:32:35.379  {
00:32:35.379  "name": "BaseBdev4",
00:32:35.379  "uuid": "b5808b9f-95c9-47cd-9984-febda31a6d19",
00:32:35.379  "is_configured": true,
00:32:35.379  "data_offset": 2048,
00:32:35.379  "data_size": 63488
00:32:35.379  }
00:32:35.379  ]
00:32:35.379  }'
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:35.379  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:35.948  [2024-11-26 17:29:05.805897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:35.948  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:35.948  "name": "Existed_Raid",
00:32:35.948  "uuid": "9ca8e989-7634-4358-a201-b51ebd0f294e",
00:32:35.948  "strip_size_kb": 64,
00:32:35.948  "state": "configuring",
00:32:35.948  "raid_level": "raid5f",
00:32:35.948  "superblock": true,
00:32:35.948  "num_base_bdevs": 4,
00:32:35.948  "num_base_bdevs_discovered": 2,
00:32:35.948  "num_base_bdevs_operational": 4,
00:32:35.948  "base_bdevs_list": [
00:32:35.948  {
00:32:35.948  "name": "BaseBdev1",
00:32:35.948  "uuid": "00000000-0000-0000-0000-000000000000",
00:32:35.948  "is_configured": false,
00:32:35.948  "data_offset": 0,
00:32:35.948  "data_size": 0
00:32:35.948  },
00:32:35.948  {
00:32:35.948  "name": null,
00:32:35.948  "uuid": "89875c9e-e668-4ce4-b031-f525d080f7ad",
00:32:35.949  "is_configured": false,
00:32:35.949  "data_offset": 0,
00:32:35.949  "data_size": 63488
00:32:35.949  },
00:32:35.949  {
00:32:35.949  "name": "BaseBdev3",
00:32:35.949  "uuid": "490e2312-1444-4457-b482-9030cdeb8a6b",
00:32:35.949  "is_configured": true,
00:32:35.949  "data_offset": 2048,
00:32:35.949  "data_size": 63488
00:32:35.949  },
00:32:35.949  {
00:32:35.949  "name": "BaseBdev4",
00:32:35.949  "uuid": "b5808b9f-95c9-47cd-9984-febda31a6d19",
00:32:35.949  "is_configured": true,
00:32:35.949  "data_offset": 2048,
00:32:35.949  "data_size": 63488
00:32:35.949  }
00:32:35.949  ]
00:32:35.949  }'
00:32:35.949  17:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:35.949  17:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:36.208  [2024-11-26 17:29:06.286161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:32:36.208  BaseBdev1
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:36.208  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:36.208  [
00:32:36.208  {
00:32:36.208  "name": "BaseBdev1",
00:32:36.208  "aliases": [
00:32:36.208  "319d5333-1132-4880-971e-3f4b2df00178"
00:32:36.208  ],
00:32:36.208  "product_name": "Malloc disk",
00:32:36.208  "block_size": 512,
00:32:36.208  "num_blocks": 65536,
00:32:36.208  "uuid": "319d5333-1132-4880-971e-3f4b2df00178",
00:32:36.208  "assigned_rate_limits": {
00:32:36.208  "rw_ios_per_sec": 0,
00:32:36.208  "rw_mbytes_per_sec": 0,
00:32:36.208  "r_mbytes_per_sec": 0,
00:32:36.208  "w_mbytes_per_sec": 0
00:32:36.208  },
00:32:36.208  "claimed": true,
00:32:36.208  "claim_type": "exclusive_write",
00:32:36.467  "zoned": false,
00:32:36.467  "supported_io_types": {
00:32:36.467  "read": true,
00:32:36.467  "write": true,
00:32:36.467  "unmap": true,
00:32:36.467  "flush": true,
00:32:36.467  "reset": true,
00:32:36.467  "nvme_admin": false,
00:32:36.467  "nvme_io": false,
00:32:36.467  "nvme_io_md": false,
00:32:36.467  "write_zeroes": true,
00:32:36.467  "zcopy": true,
00:32:36.467  "get_zone_info": false,
00:32:36.467  "zone_management": false,
00:32:36.468  "zone_append": false,
00:32:36.468  "compare": false,
00:32:36.468  "compare_and_write": false,
00:32:36.468  "abort": true,
00:32:36.468  "seek_hole": false,
00:32:36.468  "seek_data": false,
00:32:36.468  "copy": true,
00:32:36.468  "nvme_iov_md": false
00:32:36.468  },
00:32:36.468  "memory_domains": [
00:32:36.468  {
00:32:36.468  "dma_device_id": "system",
00:32:36.468  "dma_device_type": 1
00:32:36.468  },
00:32:36.468  {
00:32:36.468  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:32:36.468  "dma_device_type": 2
00:32:36.468  }
00:32:36.468  ],
00:32:36.468  "driver_specific": {}
00:32:36.468  }
00:32:36.468  ]
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:36.468  "name": "Existed_Raid",
00:32:36.468  "uuid": "9ca8e989-7634-4358-a201-b51ebd0f294e",
00:32:36.468  "strip_size_kb": 64,
00:32:36.468  "state": "configuring",
00:32:36.468  "raid_level": "raid5f",
00:32:36.468  "superblock": true,
00:32:36.468  "num_base_bdevs": 4,
00:32:36.468  "num_base_bdevs_discovered": 3,
00:32:36.468  "num_base_bdevs_operational": 4,
00:32:36.468  "base_bdevs_list": [
00:32:36.468  {
00:32:36.468  "name": "BaseBdev1",
00:32:36.468  "uuid": "319d5333-1132-4880-971e-3f4b2df00178",
00:32:36.468  "is_configured": true,
00:32:36.468  "data_offset": 2048,
00:32:36.468  "data_size": 63488
00:32:36.468  },
00:32:36.468  {
00:32:36.468  "name": null,
00:32:36.468  "uuid": "89875c9e-e668-4ce4-b031-f525d080f7ad",
00:32:36.468  "is_configured": false,
00:32:36.468  "data_offset": 0,
00:32:36.468  "data_size": 63488
00:32:36.468  },
00:32:36.468  {
00:32:36.468  "name": "BaseBdev3",
00:32:36.468  "uuid": "490e2312-1444-4457-b482-9030cdeb8a6b",
00:32:36.468  "is_configured": true,
00:32:36.468  "data_offset": 2048,
00:32:36.468  "data_size": 63488
00:32:36.468  },
00:32:36.468  {
00:32:36.468  "name": "BaseBdev4",
00:32:36.468  "uuid": "b5808b9f-95c9-47cd-9984-febda31a6d19",
00:32:36.468  "is_configured": true,
00:32:36.468  "data_offset": 2048,
00:32:36.468  "data_size": 63488
00:32:36.468  }
00:32:36.468  ]
00:32:36.468  }'
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:36.468  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:36.725  [2024-11-26 17:29:06.825577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:32:36.725  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:32:36.726  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:36.726  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:36.726  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:36.726  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:36.984  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:36.984  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:32:36.984  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:36.984  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:36.984  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:36.984  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:36.984  "name": "Existed_Raid",
00:32:36.984  "uuid": "9ca8e989-7634-4358-a201-b51ebd0f294e",
00:32:36.984  "strip_size_kb": 64,
00:32:36.984  "state": "configuring",
00:32:36.984  "raid_level": "raid5f",
00:32:36.984  "superblock": true,
00:32:36.984  "num_base_bdevs": 4,
00:32:36.984  "num_base_bdevs_discovered": 2,
00:32:36.984  "num_base_bdevs_operational": 4,
00:32:36.984  "base_bdevs_list": [
00:32:36.984  {
00:32:36.984  "name": "BaseBdev1",
00:32:36.984  "uuid": "319d5333-1132-4880-971e-3f4b2df00178",
00:32:36.984  "is_configured": true,
00:32:36.984  "data_offset": 2048,
00:32:36.984  "data_size": 63488
00:32:36.984  },
00:32:36.984  {
00:32:36.984  "name": null,
00:32:36.984  "uuid": "89875c9e-e668-4ce4-b031-f525d080f7ad",
00:32:36.984  "is_configured": false,
00:32:36.984  "data_offset": 0,
00:32:36.984  "data_size": 63488
00:32:36.984  },
00:32:36.984  {
00:32:36.984  "name": null,
00:32:36.984  "uuid": "490e2312-1444-4457-b482-9030cdeb8a6b",
00:32:36.984  "is_configured": false,
00:32:36.984  "data_offset": 0,
00:32:36.984  "data_size": 63488
00:32:36.984  },
00:32:36.984  {
00:32:36.984  "name": "BaseBdev4",
00:32:36.984  "uuid": "b5808b9f-95c9-47cd-9984-febda31a6d19",
00:32:36.984  "is_configured": true,
00:32:36.984  "data_offset": 2048,
00:32:36.984  "data_size": 63488
00:32:36.984  }
00:32:36.984  ]
00:32:36.984  }'
00:32:36.984  17:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:36.984  17:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:37.242  [2024-11-26 17:29:07.292898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:37.242  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:37.501  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:37.501  "name": "Existed_Raid",
00:32:37.501  "uuid": "9ca8e989-7634-4358-a201-b51ebd0f294e",
00:32:37.501  "strip_size_kb": 64,
00:32:37.501  "state": "configuring",
00:32:37.501  "raid_level": "raid5f",
00:32:37.501  "superblock": true,
00:32:37.501  "num_base_bdevs": 4,
00:32:37.501  "num_base_bdevs_discovered": 3,
00:32:37.501  "num_base_bdevs_operational": 4,
00:32:37.502  "base_bdevs_list": [
00:32:37.502  {
00:32:37.502  "name": "BaseBdev1",
00:32:37.502  "uuid": "319d5333-1132-4880-971e-3f4b2df00178",
00:32:37.502  "is_configured": true,
00:32:37.502  "data_offset": 2048,
00:32:37.502  "data_size": 63488
00:32:37.502  },
00:32:37.502  {
00:32:37.502  "name": null,
00:32:37.502  "uuid": "89875c9e-e668-4ce4-b031-f525d080f7ad",
00:32:37.502  "is_configured": false,
00:32:37.502  "data_offset": 0,
00:32:37.502  "data_size": 63488
00:32:37.502  },
00:32:37.502  {
00:32:37.502  "name": "BaseBdev3",
00:32:37.502  "uuid": "490e2312-1444-4457-b482-9030cdeb8a6b",
00:32:37.502  "is_configured": true,
00:32:37.502  "data_offset": 2048,
00:32:37.502  "data_size": 63488
00:32:37.502  },
00:32:37.502  {
00:32:37.502  "name": "BaseBdev4",
00:32:37.502  "uuid": "b5808b9f-95c9-47cd-9984-febda31a6d19",
00:32:37.502  "is_configured": true,
00:32:37.502  "data_offset": 2048,
00:32:37.502  "data_size": 63488
00:32:37.502  }
00:32:37.502  ]
00:32:37.502  }'
00:32:37.502  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:37.502  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:37.760  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:37.760  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:37.760  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:37.760  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:32:37.760  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:37.760  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:32:37.760  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:32:37.760  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:37.760  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:37.760  [2024-11-26 17:29:07.836235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:38.077  "name": "Existed_Raid",
00:32:38.077  "uuid": "9ca8e989-7634-4358-a201-b51ebd0f294e",
00:32:38.077  "strip_size_kb": 64,
00:32:38.077  "state": "configuring",
00:32:38.077  "raid_level": "raid5f",
00:32:38.077  "superblock": true,
00:32:38.077  "num_base_bdevs": 4,
00:32:38.077  "num_base_bdevs_discovered": 2,
00:32:38.077  "num_base_bdevs_operational": 4,
00:32:38.077  "base_bdevs_list": [
00:32:38.077  {
00:32:38.077  "name": null,
00:32:38.077  "uuid": "319d5333-1132-4880-971e-3f4b2df00178",
00:32:38.077  "is_configured": false,
00:32:38.077  "data_offset": 0,
00:32:38.077  "data_size": 63488
00:32:38.077  },
00:32:38.077  {
00:32:38.077  "name": null,
00:32:38.077  "uuid": "89875c9e-e668-4ce4-b031-f525d080f7ad",
00:32:38.077  "is_configured": false,
00:32:38.077  "data_offset": 0,
00:32:38.077  "data_size": 63488
00:32:38.077  },
00:32:38.077  {
00:32:38.077  "name": "BaseBdev3",
00:32:38.077  "uuid": "490e2312-1444-4457-b482-9030cdeb8a6b",
00:32:38.077  "is_configured": true,
00:32:38.077  "data_offset": 2048,
00:32:38.077  "data_size": 63488
00:32:38.077  },
00:32:38.077  {
00:32:38.077  "name": "BaseBdev4",
00:32:38.077  "uuid": "b5808b9f-95c9-47cd-9984-febda31a6d19",
00:32:38.077  "is_configured": true,
00:32:38.077  "data_offset": 2048,
00:32:38.077  "data_size": 63488
00:32:38.077  }
00:32:38.077  ]
00:32:38.077  }'
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:38.077  17:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:38.351  [2024-11-26 17:29:08.436510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:38.351  17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:38.678  17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:38.678  17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.678 17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:38.678 17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.678 17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.678 17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:38.678 "name": "Existed_Raid", 00:32:38.678 "uuid": "9ca8e989-7634-4358-a201-b51ebd0f294e", 00:32:38.678 "strip_size_kb": 64, 00:32:38.678 "state": "configuring", 00:32:38.678 "raid_level": "raid5f", 00:32:38.678 "superblock": true, 00:32:38.678 "num_base_bdevs": 4, 00:32:38.678 "num_base_bdevs_discovered": 3, 00:32:38.678 "num_base_bdevs_operational": 4, 00:32:38.678 "base_bdevs_list": [ 00:32:38.678 { 00:32:38.678 "name": null, 00:32:38.678 "uuid": "319d5333-1132-4880-971e-3f4b2df00178", 00:32:38.678 "is_configured": false, 00:32:38.678 "data_offset": 0, 00:32:38.678 "data_size": 63488 00:32:38.678 }, 00:32:38.678 { 00:32:38.678 "name": "BaseBdev2", 00:32:38.678 "uuid": "89875c9e-e668-4ce4-b031-f525d080f7ad", 00:32:38.678 "is_configured": true, 00:32:38.678 "data_offset": 2048, 00:32:38.678 "data_size": 63488 00:32:38.678 }, 00:32:38.678 { 00:32:38.678 "name": "BaseBdev3", 00:32:38.678 "uuid": "490e2312-1444-4457-b482-9030cdeb8a6b", 00:32:38.678 "is_configured": true, 00:32:38.678 "data_offset": 2048, 00:32:38.678 "data_size": 63488 00:32:38.678 }, 00:32:38.678 { 00:32:38.678 "name": "BaseBdev4", 00:32:38.678 "uuid": "b5808b9f-95c9-47cd-9984-febda31a6d19", 00:32:38.678 "is_configured": true, 00:32:38.678 "data_offset": 2048, 00:32:38.678 "data_size": 63488 00:32:38.678 } 00:32:38.678 ] 00:32:38.678 }' 00:32:38.678 17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:38.678 17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:32:38.937 17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:38.937 17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.937 17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.937 17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:38.937 17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.937 17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:32:38.937 17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:38.937 17:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:38.937 17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.937 17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.937 17:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.937 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 319d5333-1132-4880-971e-3f4b2df00178 00:32:38.937 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.937 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.937 [2024-11-26 17:29:09.047980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:38.937 [2024-11-26 17:29:09.048266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:32:38.937 [2024-11-26 
17:29:09.048282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:38.937 [2024-11-26 17:29:09.048605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:32:38.937 NewBaseBdev 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.195 [2024-11-26 17:29:09.055899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:32:39.195 [2024-11-26 17:29:09.056125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:32:39.195 [2024-11-26 17:29:09.056562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.195 [ 00:32:39.195 { 00:32:39.195 "name": "NewBaseBdev", 00:32:39.195 "aliases": [ 00:32:39.195 "319d5333-1132-4880-971e-3f4b2df00178" 00:32:39.195 ], 00:32:39.195 "product_name": "Malloc disk", 00:32:39.195 "block_size": 512, 00:32:39.195 "num_blocks": 65536, 00:32:39.195 "uuid": "319d5333-1132-4880-971e-3f4b2df00178", 00:32:39.195 "assigned_rate_limits": { 00:32:39.195 "rw_ios_per_sec": 0, 00:32:39.195 "rw_mbytes_per_sec": 0, 00:32:39.195 "r_mbytes_per_sec": 0, 00:32:39.195 "w_mbytes_per_sec": 0 00:32:39.195 }, 00:32:39.195 "claimed": true, 00:32:39.195 "claim_type": "exclusive_write", 00:32:39.195 "zoned": false, 00:32:39.195 "supported_io_types": { 00:32:39.195 "read": true, 00:32:39.195 "write": true, 00:32:39.195 "unmap": true, 00:32:39.195 "flush": true, 00:32:39.195 "reset": true, 00:32:39.195 "nvme_admin": false, 00:32:39.195 "nvme_io": false, 00:32:39.195 "nvme_io_md": false, 00:32:39.195 "write_zeroes": true, 00:32:39.195 "zcopy": true, 00:32:39.195 "get_zone_info": false, 00:32:39.195 "zone_management": false, 00:32:39.195 "zone_append": false, 00:32:39.195 "compare": false, 00:32:39.195 "compare_and_write": false, 00:32:39.195 "abort": true, 00:32:39.195 "seek_hole": false, 00:32:39.195 "seek_data": false, 00:32:39.195 "copy": true, 00:32:39.195 "nvme_iov_md": false 00:32:39.195 }, 00:32:39.195 "memory_domains": [ 00:32:39.195 { 00:32:39.195 "dma_device_id": "system", 00:32:39.195 "dma_device_type": 1 00:32:39.195 }, 00:32:39.195 { 00:32:39.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:39.195 "dma_device_type": 2 00:32:39.195 } 00:32:39.195 ], 00:32:39.195 "driver_specific": {} 00:32:39.195 } 00:32:39.195 ] 00:32:39.195 17:29:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:39.195 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:39.196 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:39.196 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:39.196 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:39.196 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:39.196 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:39.196 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:39.196 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.196 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.196 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:39.196 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:39.196 "name": "Existed_Raid", 00:32:39.196 "uuid": "9ca8e989-7634-4358-a201-b51ebd0f294e", 00:32:39.196 "strip_size_kb": 64, 00:32:39.196 "state": "online", 00:32:39.196 "raid_level": "raid5f", 00:32:39.196 "superblock": true, 00:32:39.196 "num_base_bdevs": 4, 00:32:39.196 "num_base_bdevs_discovered": 4, 00:32:39.196 "num_base_bdevs_operational": 4, 00:32:39.196 "base_bdevs_list": [ 00:32:39.196 { 00:32:39.196 "name": "NewBaseBdev", 00:32:39.196 "uuid": "319d5333-1132-4880-971e-3f4b2df00178", 00:32:39.196 "is_configured": true, 00:32:39.196 "data_offset": 2048, 00:32:39.196 "data_size": 63488 00:32:39.196 }, 00:32:39.196 { 00:32:39.196 "name": "BaseBdev2", 00:32:39.196 "uuid": "89875c9e-e668-4ce4-b031-f525d080f7ad", 00:32:39.196 "is_configured": true, 00:32:39.196 "data_offset": 2048, 00:32:39.196 "data_size": 63488 00:32:39.196 }, 00:32:39.196 { 00:32:39.196 "name": "BaseBdev3", 00:32:39.196 "uuid": "490e2312-1444-4457-b482-9030cdeb8a6b", 00:32:39.196 "is_configured": true, 00:32:39.196 "data_offset": 2048, 00:32:39.196 "data_size": 63488 00:32:39.196 }, 00:32:39.196 { 00:32:39.196 "name": "BaseBdev4", 00:32:39.196 "uuid": "b5808b9f-95c9-47cd-9984-febda31a6d19", 00:32:39.196 "is_configured": true, 00:32:39.196 "data_offset": 2048, 00:32:39.196 "data_size": 63488 00:32:39.196 } 00:32:39.196 ] 00:32:39.196 }' 00:32:39.196 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:39.196 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.453 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:32:39.453 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:39.453 17:29:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:39.453 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:39.453 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:32:39.453 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:39.453 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:39.453 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.453 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:39.453 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.453 [2024-11-26 17:29:09.532977] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:39.453 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:39.711 "name": "Existed_Raid", 00:32:39.711 "aliases": [ 00:32:39.711 "9ca8e989-7634-4358-a201-b51ebd0f294e" 00:32:39.711 ], 00:32:39.711 "product_name": "Raid Volume", 00:32:39.711 "block_size": 512, 00:32:39.711 "num_blocks": 190464, 00:32:39.711 "uuid": "9ca8e989-7634-4358-a201-b51ebd0f294e", 00:32:39.711 "assigned_rate_limits": { 00:32:39.711 "rw_ios_per_sec": 0, 00:32:39.711 "rw_mbytes_per_sec": 0, 00:32:39.711 "r_mbytes_per_sec": 0, 00:32:39.711 "w_mbytes_per_sec": 0 00:32:39.711 }, 00:32:39.711 "claimed": false, 00:32:39.711 "zoned": false, 00:32:39.711 "supported_io_types": { 00:32:39.711 "read": true, 00:32:39.711 "write": true, 00:32:39.711 "unmap": false, 00:32:39.711 "flush": false, 00:32:39.711 "reset": true, 00:32:39.711 "nvme_admin": false, 00:32:39.711 "nvme_io": false, 
00:32:39.711 "nvme_io_md": false, 00:32:39.711 "write_zeroes": true, 00:32:39.711 "zcopy": false, 00:32:39.711 "get_zone_info": false, 00:32:39.711 "zone_management": false, 00:32:39.711 "zone_append": false, 00:32:39.711 "compare": false, 00:32:39.711 "compare_and_write": false, 00:32:39.711 "abort": false, 00:32:39.711 "seek_hole": false, 00:32:39.711 "seek_data": false, 00:32:39.711 "copy": false, 00:32:39.711 "nvme_iov_md": false 00:32:39.711 }, 00:32:39.711 "driver_specific": { 00:32:39.711 "raid": { 00:32:39.711 "uuid": "9ca8e989-7634-4358-a201-b51ebd0f294e", 00:32:39.711 "strip_size_kb": 64, 00:32:39.711 "state": "online", 00:32:39.711 "raid_level": "raid5f", 00:32:39.711 "superblock": true, 00:32:39.711 "num_base_bdevs": 4, 00:32:39.711 "num_base_bdevs_discovered": 4, 00:32:39.711 "num_base_bdevs_operational": 4, 00:32:39.711 "base_bdevs_list": [ 00:32:39.711 { 00:32:39.711 "name": "NewBaseBdev", 00:32:39.711 "uuid": "319d5333-1132-4880-971e-3f4b2df00178", 00:32:39.711 "is_configured": true, 00:32:39.711 "data_offset": 2048, 00:32:39.711 "data_size": 63488 00:32:39.711 }, 00:32:39.711 { 00:32:39.711 "name": "BaseBdev2", 00:32:39.711 "uuid": "89875c9e-e668-4ce4-b031-f525d080f7ad", 00:32:39.711 "is_configured": true, 00:32:39.711 "data_offset": 2048, 00:32:39.711 "data_size": 63488 00:32:39.711 }, 00:32:39.711 { 00:32:39.711 "name": "BaseBdev3", 00:32:39.711 "uuid": "490e2312-1444-4457-b482-9030cdeb8a6b", 00:32:39.711 "is_configured": true, 00:32:39.711 "data_offset": 2048, 00:32:39.711 "data_size": 63488 00:32:39.711 }, 00:32:39.711 { 00:32:39.711 "name": "BaseBdev4", 00:32:39.711 "uuid": "b5808b9f-95c9-47cd-9984-febda31a6d19", 00:32:39.711 "is_configured": true, 00:32:39.711 "data_offset": 2048, 00:32:39.711 "data_size": 63488 00:32:39.711 } 00:32:39.711 ] 00:32:39.711 } 00:32:39.711 } 00:32:39.711 }' 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:32:39.711 BaseBdev2 00:32:39.711 BaseBdev3 00:32:39.711 BaseBdev4' 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.711 17:29:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.711 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.970 [2024-11-26 17:29:09.876275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:39.970 [2024-11-26 17:29:09.876328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:39.970 [2024-11-26 17:29:09.876431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:39.970 [2024-11-26 17:29:09.876774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:39.970 [2024-11-26 17:29:09.876792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83610 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83610 ']' 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83610 00:32:39.970 17:29:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83610 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:39.970 killing process with pid 83610 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83610' 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83610 00:32:39.970 [2024-11-26 17:29:09.928948] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:39.970 17:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83610 00:32:40.535 [2024-11-26 17:29:10.365089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:41.910 17:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:32:41.910 00:32:41.910 real 0m11.944s 00:32:41.910 user 0m18.698s 00:32:41.910 sys 0m2.590s 00:32:41.910 17:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:41.910 ************************************ 00:32:41.910 END TEST raid5f_state_function_test_sb 00:32:41.910 ************************************ 00:32:41.910 17:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.910 17:29:11 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:32:41.910 17:29:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:41.910 
17:29:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:41.910 17:29:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:41.910 ************************************ 00:32:41.910 START TEST raid5f_superblock_test 00:32:41.910 ************************************ 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84275 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84275 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84275 ']' 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:41.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:41.910 17:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:41.910 [2024-11-26 17:29:11.811805] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:32:41.910 [2024-11-26 17:29:11.811945] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84275 ] 00:32:41.910 [2024-11-26 17:29:12.007098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.170 [2024-11-26 17:29:12.165228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.437 [2024-11-26 17:29:12.394604] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:42.437 [2024-11-26 17:29:12.394678] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.717 malloc1 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.717 [2024-11-26 17:29:12.710181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:42.717 [2024-11-26 17:29:12.710261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:42.717 [2024-11-26 17:29:12.710287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:42.717 [2024-11-26 17:29:12.710300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:42.717 [2024-11-26 17:29:12.712878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:42.717 [2024-11-26 17:29:12.712920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:42.717 pt1 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.717 malloc2 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.717 [2024-11-26 17:29:12.769215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:42.717 [2024-11-26 17:29:12.769291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:42.717 [2024-11-26 17:29:12.769324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:42.717 [2024-11-26 17:29:12.769337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:42.717 [2024-11-26 17:29:12.772036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:42.717 [2024-11-26 17:29:12.772074] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:42.717 pt2 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:42.717 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:42.718 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:32:42.718 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:32:42.718 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:32:42.718 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:42.718 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:42.718 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:42.718 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:32:42.718 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.718 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.977 malloc3 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.977 [2024-11-26 17:29:12.840403] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:42.977 [2024-11-26 17:29:12.840462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:42.977 [2024-11-26 17:29:12.840486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:42.977 [2024-11-26 17:29:12.840498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:42.977 [2024-11-26 17:29:12.843036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:42.977 [2024-11-26 17:29:12.843074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:42.977 pt3 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.977 17:29:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.977 malloc4 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.977 [2024-11-26 17:29:12.897995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:42.977 [2024-11-26 17:29:12.898070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:42.977 [2024-11-26 17:29:12.898094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:42.977 [2024-11-26 17:29:12.898107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:42.977 [2024-11-26 17:29:12.900685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:42.977 [2024-11-26 17:29:12.900719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:42.977 pt4 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:32:42.977 [2024-11-26 17:29:12.910010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:42.977 [2024-11-26 17:29:12.912230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:42.977 [2024-11-26 17:29:12.912325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:42.977 [2024-11-26 17:29:12.912372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:42.977 [2024-11-26 17:29:12.912613] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:42.977 [2024-11-26 17:29:12.912639] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:42.977 [2024-11-26 17:29:12.912912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:42.977 [2024-11-26 17:29:12.920610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:42.977 [2024-11-26 17:29:12.920639] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:42.977 [2024-11-26 17:29:12.920842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:42.977 
17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:42.977 "name": "raid_bdev1", 00:32:42.977 "uuid": "97d4ed85-b2f1-42c6-978b-7e98b34d3215", 00:32:42.977 "strip_size_kb": 64, 00:32:42.977 "state": "online", 00:32:42.977 "raid_level": "raid5f", 00:32:42.977 "superblock": true, 00:32:42.977 "num_base_bdevs": 4, 00:32:42.977 "num_base_bdevs_discovered": 4, 00:32:42.977 "num_base_bdevs_operational": 4, 00:32:42.977 "base_bdevs_list": [ 00:32:42.977 { 00:32:42.977 "name": "pt1", 00:32:42.977 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:42.977 "is_configured": true, 00:32:42.977 "data_offset": 2048, 00:32:42.977 "data_size": 63488 00:32:42.977 }, 00:32:42.977 { 00:32:42.977 "name": "pt2", 00:32:42.977 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:42.977 "is_configured": true, 00:32:42.977 "data_offset": 2048, 00:32:42.977 
"data_size": 63488 00:32:42.977 }, 00:32:42.977 { 00:32:42.977 "name": "pt3", 00:32:42.977 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:42.977 "is_configured": true, 00:32:42.977 "data_offset": 2048, 00:32:42.977 "data_size": 63488 00:32:42.977 }, 00:32:42.977 { 00:32:42.977 "name": "pt4", 00:32:42.977 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:42.977 "is_configured": true, 00:32:42.977 "data_offset": 2048, 00:32:42.977 "data_size": 63488 00:32:42.977 } 00:32:42.977 ] 00:32:42.977 }' 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:42.977 17:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:43.546 [2024-11-26 17:29:13.369161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:43.546 "name": "raid_bdev1", 00:32:43.546 "aliases": [ 00:32:43.546 "97d4ed85-b2f1-42c6-978b-7e98b34d3215" 00:32:43.546 ], 00:32:43.546 "product_name": "Raid Volume", 00:32:43.546 "block_size": 512, 00:32:43.546 "num_blocks": 190464, 00:32:43.546 "uuid": "97d4ed85-b2f1-42c6-978b-7e98b34d3215", 00:32:43.546 "assigned_rate_limits": { 00:32:43.546 "rw_ios_per_sec": 0, 00:32:43.546 "rw_mbytes_per_sec": 0, 00:32:43.546 "r_mbytes_per_sec": 0, 00:32:43.546 "w_mbytes_per_sec": 0 00:32:43.546 }, 00:32:43.546 "claimed": false, 00:32:43.546 "zoned": false, 00:32:43.546 "supported_io_types": { 00:32:43.546 "read": true, 00:32:43.546 "write": true, 00:32:43.546 "unmap": false, 00:32:43.546 "flush": false, 00:32:43.546 "reset": true, 00:32:43.546 "nvme_admin": false, 00:32:43.546 "nvme_io": false, 00:32:43.546 "nvme_io_md": false, 00:32:43.546 "write_zeroes": true, 00:32:43.546 "zcopy": false, 00:32:43.546 "get_zone_info": false, 00:32:43.546 "zone_management": false, 00:32:43.546 "zone_append": false, 00:32:43.546 "compare": false, 00:32:43.546 "compare_and_write": false, 00:32:43.546 "abort": false, 00:32:43.546 "seek_hole": false, 00:32:43.546 "seek_data": false, 00:32:43.546 "copy": false, 00:32:43.546 "nvme_iov_md": false 00:32:43.546 }, 00:32:43.546 "driver_specific": { 00:32:43.546 "raid": { 00:32:43.546 "uuid": "97d4ed85-b2f1-42c6-978b-7e98b34d3215", 00:32:43.546 "strip_size_kb": 64, 00:32:43.546 "state": "online", 00:32:43.546 "raid_level": "raid5f", 00:32:43.546 "superblock": true, 00:32:43.546 "num_base_bdevs": 4, 00:32:43.546 "num_base_bdevs_discovered": 4, 00:32:43.546 "num_base_bdevs_operational": 4, 00:32:43.546 "base_bdevs_list": [ 00:32:43.546 { 00:32:43.546 "name": "pt1", 00:32:43.546 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:43.546 "is_configured": true, 00:32:43.546 "data_offset": 2048, 
00:32:43.546 "data_size": 63488 00:32:43.546 }, 00:32:43.546 { 00:32:43.546 "name": "pt2", 00:32:43.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:43.546 "is_configured": true, 00:32:43.546 "data_offset": 2048, 00:32:43.546 "data_size": 63488 00:32:43.546 }, 00:32:43.546 { 00:32:43.546 "name": "pt3", 00:32:43.546 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:43.546 "is_configured": true, 00:32:43.546 "data_offset": 2048, 00:32:43.546 "data_size": 63488 00:32:43.546 }, 00:32:43.546 { 00:32:43.546 "name": "pt4", 00:32:43.546 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:43.546 "is_configured": true, 00:32:43.546 "data_offset": 2048, 00:32:43.546 "data_size": 63488 00:32:43.546 } 00:32:43.546 ] 00:32:43.546 } 00:32:43.546 } 00:32:43.546 }' 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:43.546 pt2 00:32:43.546 pt3 00:32:43.546 pt4' 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.546 17:29:13 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:43.546 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.806 [2024-11-26 17:29:13.668850] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=97d4ed85-b2f1-42c6-978b-7e98b34d3215 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
97d4ed85-b2f1-42c6-978b-7e98b34d3215 ']' 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.806 [2024-11-26 17:29:13.712684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:43.806 [2024-11-26 17:29:13.712729] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:43.806 [2024-11-26 17:29:13.712838] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:43.806 [2024-11-26 17:29:13.712930] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:43.806 [2024-11-26 17:29:13.712951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:43.806 
17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.806 17:29:13 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:32:43.806 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.806 [2024-11-26 17:29:13.880460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:43.806 [2024-11-26 17:29:13.882896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:43.806 [2024-11-26 17:29:13.882972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:32:43.806 [2024-11-26 17:29:13.883009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:32:43.806 [2024-11-26 17:29:13.883066] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:43.806 [2024-11-26 17:29:13.883125] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:43.806 [2024-11-26 17:29:13.883149] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:32:43.807 [2024-11-26 17:29:13.883171] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:32:43.807 [2024-11-26 17:29:13.883188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:43.807 [2024-11-26 17:29:13.883201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:32:43.807 request: 00:32:43.807 { 00:32:43.807 "name": "raid_bdev1", 00:32:43.807 "raid_level": "raid5f", 00:32:43.807 "base_bdevs": [ 00:32:43.807 "malloc1", 00:32:43.807 "malloc2", 00:32:43.807 "malloc3", 00:32:43.807 "malloc4" 00:32:43.807 ], 00:32:43.807 "strip_size_kb": 64, 00:32:43.807 "superblock": false, 00:32:43.807 "method": "bdev_raid_create", 00:32:43.807 "req_id": 1 00:32:43.807 } 00:32:43.807 Got JSON-RPC error response 
00:32:43.807 response: 00:32:43.807 { 00:32:43.807 "code": -17, 00:32:43.807 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:43.807 } 00:32:43.807 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:43.807 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:32:43.807 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:43.807 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:43.807 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:43.807 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:43.807 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:32:43.807 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.807 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.807 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.066 [2024-11-26 17:29:13.948306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:44.066 [2024-11-26 17:29:13.948386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:32:44.066 [2024-11-26 17:29:13.948409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:44.066 [2024-11-26 17:29:13.948423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:44.066 [2024-11-26 17:29:13.951371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:44.066 [2024-11-26 17:29:13.951416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:44.066 [2024-11-26 17:29:13.951529] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:44.066 [2024-11-26 17:29:13.951604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:44.066 pt1 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:44.066 "name": "raid_bdev1", 00:32:44.066 "uuid": "97d4ed85-b2f1-42c6-978b-7e98b34d3215", 00:32:44.066 "strip_size_kb": 64, 00:32:44.066 "state": "configuring", 00:32:44.066 "raid_level": "raid5f", 00:32:44.066 "superblock": true, 00:32:44.066 "num_base_bdevs": 4, 00:32:44.066 "num_base_bdevs_discovered": 1, 00:32:44.066 "num_base_bdevs_operational": 4, 00:32:44.066 "base_bdevs_list": [ 00:32:44.066 { 00:32:44.066 "name": "pt1", 00:32:44.066 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:44.066 "is_configured": true, 00:32:44.066 "data_offset": 2048, 00:32:44.066 "data_size": 63488 00:32:44.066 }, 00:32:44.066 { 00:32:44.066 "name": null, 00:32:44.066 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:44.066 "is_configured": false, 00:32:44.066 "data_offset": 2048, 00:32:44.066 "data_size": 63488 00:32:44.066 }, 00:32:44.066 { 00:32:44.066 "name": null, 00:32:44.066 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:44.066 "is_configured": false, 00:32:44.066 "data_offset": 2048, 00:32:44.066 "data_size": 63488 00:32:44.066 }, 00:32:44.066 { 00:32:44.066 "name": null, 00:32:44.066 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:44.066 "is_configured": false, 00:32:44.066 "data_offset": 2048, 00:32:44.066 "data_size": 63488 00:32:44.066 } 00:32:44.066 ] 00:32:44.066 }' 
00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:44.066 17:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.325 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:32:44.325 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:44.325 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.325 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.325 [2024-11-26 17:29:14.423772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:44.325 [2024-11-26 17:29:14.423882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:44.325 [2024-11-26 17:29:14.423910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:32:44.325 [2024-11-26 17:29:14.423925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:44.325 [2024-11-26 17:29:14.424479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:44.325 [2024-11-26 17:29:14.424505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:44.325 [2024-11-26 17:29:14.424625] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:44.325 [2024-11-26 17:29:14.424659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:44.325 pt2 00:32:44.325 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.325 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:32:44.325 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:44.325 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.325 [2024-11-26 17:29:14.435740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:44.583 "name": "raid_bdev1", 00:32:44.583 "uuid": "97d4ed85-b2f1-42c6-978b-7e98b34d3215", 00:32:44.583 "strip_size_kb": 64, 00:32:44.583 "state": "configuring", 00:32:44.583 "raid_level": "raid5f", 00:32:44.583 "superblock": true, 00:32:44.583 "num_base_bdevs": 4, 00:32:44.583 "num_base_bdevs_discovered": 1, 00:32:44.583 "num_base_bdevs_operational": 4, 00:32:44.583 "base_bdevs_list": [ 00:32:44.583 { 00:32:44.583 "name": "pt1", 00:32:44.583 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:44.583 "is_configured": true, 00:32:44.583 "data_offset": 2048, 00:32:44.583 "data_size": 63488 00:32:44.583 }, 00:32:44.583 { 00:32:44.583 "name": null, 00:32:44.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:44.583 "is_configured": false, 00:32:44.583 "data_offset": 0, 00:32:44.583 "data_size": 63488 00:32:44.583 }, 00:32:44.583 { 00:32:44.583 "name": null, 00:32:44.583 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:44.583 "is_configured": false, 00:32:44.583 "data_offset": 2048, 00:32:44.583 "data_size": 63488 00:32:44.583 }, 00:32:44.583 { 00:32:44.583 "name": null, 00:32:44.583 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:44.583 "is_configured": false, 00:32:44.583 "data_offset": 2048, 00:32:44.583 "data_size": 63488 00:32:44.583 } 00:32:44.583 ] 00:32:44.583 }' 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:44.583 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.843 [2024-11-26 17:29:14.915101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:44.843 [2024-11-26 17:29:14.915200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:44.843 [2024-11-26 17:29:14.915229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:32:44.843 [2024-11-26 17:29:14.915242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:44.843 [2024-11-26 17:29:14.915823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:44.843 [2024-11-26 17:29:14.915853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:44.843 [2024-11-26 17:29:14.915960] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:44.843 [2024-11-26 17:29:14.915988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:44.843 pt2 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.843 [2024-11-26 17:29:14.927049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:32:44.843 [2024-11-26 17:29:14.927116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:44.843 [2024-11-26 17:29:14.927149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:32:44.843 [2024-11-26 17:29:14.927162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:44.843 [2024-11-26 17:29:14.927648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:44.843 [2024-11-26 17:29:14.927673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:44.843 [2024-11-26 17:29:14.927752] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:32:44.843 [2024-11-26 17:29:14.927781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:44.843 pt3 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.843 [2024-11-26 17:29:14.934999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:44.843 [2024-11-26 17:29:14.935177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:44.843 [2024-11-26 17:29:14.935209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:32:44.843 [2024-11-26 17:29:14.935222] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:44.843 [2024-11-26 17:29:14.935689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:44.843 [2024-11-26 17:29:14.935711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:44.843 [2024-11-26 17:29:14.935782] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:32:44.843 [2024-11-26 17:29:14.935807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:44.843 [2024-11-26 17:29:14.935959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:44.843 [2024-11-26 17:29:14.935970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:44.843 [2024-11-26 17:29:14.936246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:44.843 [2024-11-26 17:29:14.944074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:44.843 [2024-11-26 17:29:14.944101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:44.843 [2024-11-26 17:29:14.944300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:44.843 pt4 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:44.843 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:45.102 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.103 17:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.103 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.103 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.103 17:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.103 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:45.103 "name": "raid_bdev1", 00:32:45.103 "uuid": "97d4ed85-b2f1-42c6-978b-7e98b34d3215", 00:32:45.103 "strip_size_kb": 64, 00:32:45.103 "state": "online", 00:32:45.103 "raid_level": "raid5f", 00:32:45.103 "superblock": true, 00:32:45.103 "num_base_bdevs": 4, 00:32:45.103 "num_base_bdevs_discovered": 4, 00:32:45.103 "num_base_bdevs_operational": 4, 00:32:45.103 "base_bdevs_list": [ 00:32:45.103 { 00:32:45.103 "name": "pt1", 00:32:45.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:45.103 "is_configured": true, 00:32:45.103 
"data_offset": 2048, 00:32:45.103 "data_size": 63488 00:32:45.103 }, 00:32:45.103 { 00:32:45.103 "name": "pt2", 00:32:45.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:45.103 "is_configured": true, 00:32:45.103 "data_offset": 2048, 00:32:45.103 "data_size": 63488 00:32:45.103 }, 00:32:45.103 { 00:32:45.103 "name": "pt3", 00:32:45.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:45.103 "is_configured": true, 00:32:45.103 "data_offset": 2048, 00:32:45.103 "data_size": 63488 00:32:45.103 }, 00:32:45.103 { 00:32:45.103 "name": "pt4", 00:32:45.103 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:45.103 "is_configured": true, 00:32:45.103 "data_offset": 2048, 00:32:45.103 "data_size": 63488 00:32:45.103 } 00:32:45.103 ] 00:32:45.103 }' 00:32:45.103 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:45.103 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.362 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:32:45.362 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:45.362 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:45.362 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:45.362 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:45.362 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:45.362 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:45.362 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:45.362 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.362 17:29:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.362 [2024-11-26 17:29:15.374028] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:45.362 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.362 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:45.362 "name": "raid_bdev1", 00:32:45.362 "aliases": [ 00:32:45.362 "97d4ed85-b2f1-42c6-978b-7e98b34d3215" 00:32:45.362 ], 00:32:45.362 "product_name": "Raid Volume", 00:32:45.362 "block_size": 512, 00:32:45.362 "num_blocks": 190464, 00:32:45.362 "uuid": "97d4ed85-b2f1-42c6-978b-7e98b34d3215", 00:32:45.362 "assigned_rate_limits": { 00:32:45.362 "rw_ios_per_sec": 0, 00:32:45.362 "rw_mbytes_per_sec": 0, 00:32:45.362 "r_mbytes_per_sec": 0, 00:32:45.362 "w_mbytes_per_sec": 0 00:32:45.362 }, 00:32:45.362 "claimed": false, 00:32:45.362 "zoned": false, 00:32:45.362 "supported_io_types": { 00:32:45.362 "read": true, 00:32:45.362 "write": true, 00:32:45.362 "unmap": false, 00:32:45.362 "flush": false, 00:32:45.362 "reset": true, 00:32:45.362 "nvme_admin": false, 00:32:45.362 "nvme_io": false, 00:32:45.362 "nvme_io_md": false, 00:32:45.362 "write_zeroes": true, 00:32:45.362 "zcopy": false, 00:32:45.362 "get_zone_info": false, 00:32:45.362 "zone_management": false, 00:32:45.362 "zone_append": false, 00:32:45.362 "compare": false, 00:32:45.362 "compare_and_write": false, 00:32:45.362 "abort": false, 00:32:45.362 "seek_hole": false, 00:32:45.362 "seek_data": false, 00:32:45.362 "copy": false, 00:32:45.362 "nvme_iov_md": false 00:32:45.362 }, 00:32:45.362 "driver_specific": { 00:32:45.362 "raid": { 00:32:45.362 "uuid": "97d4ed85-b2f1-42c6-978b-7e98b34d3215", 00:32:45.362 "strip_size_kb": 64, 00:32:45.362 "state": "online", 00:32:45.362 "raid_level": "raid5f", 00:32:45.362 "superblock": true, 00:32:45.362 "num_base_bdevs": 4, 00:32:45.362 "num_base_bdevs_discovered": 4, 
00:32:45.362 "num_base_bdevs_operational": 4, 00:32:45.362 "base_bdevs_list": [ 00:32:45.362 { 00:32:45.362 "name": "pt1", 00:32:45.362 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:45.362 "is_configured": true, 00:32:45.362 "data_offset": 2048, 00:32:45.362 "data_size": 63488 00:32:45.362 }, 00:32:45.362 { 00:32:45.362 "name": "pt2", 00:32:45.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:45.362 "is_configured": true, 00:32:45.362 "data_offset": 2048, 00:32:45.362 "data_size": 63488 00:32:45.362 }, 00:32:45.362 { 00:32:45.362 "name": "pt3", 00:32:45.362 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:45.362 "is_configured": true, 00:32:45.362 "data_offset": 2048, 00:32:45.362 "data_size": 63488 00:32:45.362 }, 00:32:45.362 { 00:32:45.362 "name": "pt4", 00:32:45.362 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:45.362 "is_configured": true, 00:32:45.362 "data_offset": 2048, 00:32:45.362 "data_size": 63488 00:32:45.362 } 00:32:45.362 ] 00:32:45.362 } 00:32:45.362 } 00:32:45.362 }' 00:32:45.362 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:45.362 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:45.362 pt2 00:32:45.362 pt3 00:32:45.362 pt4' 00:32:45.362 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.621 17:29:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:32:45.621 [2024-11-26 17:29:15.701950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:45.621 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.880 
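The `verify_raid_bdev_properties` loop above compares the metadata format of the raid volume against each configured base bdev via `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'`. A small sketch of that comparison (`fmt_key` is a hypothetical name; field names are from the `bdev_get_bdevs` output in the log, and null fields render as empty strings, which is why the shell check is `[[ 512 == \5\1\2\ \ \ ]]`):

```python
def fmt_key(bdev):
    # Mirrors: jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    # Absent/null fields become empty strings, so a plain 512-byte bdev with
    # no metadata yields "512" followed by three spaces.
    fields = [bdev.get("block_size"), bdev.get("md_size"),
              bdev.get("md_interleave"), bdev.get("dif_type")]
    return " ".join("" if f is None else str(f) for f in fields)

raid_vol = {"block_size": 512}   # trimmed from the raid_bdev1 record above
pt1 = {"block_size": 512}        # each pt bdev must match the volume's format

assert fmt_key(raid_vol) == "512   "
assert fmt_key(pt1) == fmt_key(raid_vol)
```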
17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 97d4ed85-b2f1-42c6-978b-7e98b34d3215 '!=' 97d4ed85-b2f1-42c6-978b-7e98b34d3215 ']' 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.880 [2024-11-26 17:29:15.745872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:45.880 "name": "raid_bdev1", 00:32:45.880 "uuid": "97d4ed85-b2f1-42c6-978b-7e98b34d3215", 00:32:45.880 "strip_size_kb": 64, 00:32:45.880 "state": "online", 00:32:45.880 "raid_level": "raid5f", 00:32:45.880 "superblock": true, 00:32:45.880 "num_base_bdevs": 4, 00:32:45.880 "num_base_bdevs_discovered": 3, 00:32:45.880 "num_base_bdevs_operational": 3, 00:32:45.880 "base_bdevs_list": [ 00:32:45.880 { 00:32:45.880 "name": null, 00:32:45.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:45.880 "is_configured": false, 00:32:45.880 "data_offset": 0, 00:32:45.880 "data_size": 63488 00:32:45.880 }, 00:32:45.880 { 00:32:45.880 "name": "pt2", 00:32:45.880 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:45.880 "is_configured": true, 00:32:45.880 "data_offset": 2048, 00:32:45.880 "data_size": 63488 00:32:45.880 }, 00:32:45.880 { 00:32:45.880 "name": "pt3", 00:32:45.880 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:45.880 "is_configured": true, 00:32:45.880 "data_offset": 2048, 00:32:45.880 "data_size": 63488 00:32:45.880 }, 00:32:45.880 { 00:32:45.880 "name": "pt4", 00:32:45.880 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:45.880 "is_configured": true, 00:32:45.880 
"data_offset": 2048, 00:32:45.880 "data_size": 63488 00:32:45.880 } 00:32:45.880 ] 00:32:45.880 }' 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:45.880 17:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.139 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:46.139 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.139 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.139 [2024-11-26 17:29:16.181801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:46.139 [2024-11-26 17:29:16.181854] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:46.139 [2024-11-26 17:29:16.181957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:46.139 [2024-11-26 17:29:16.182051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:46.139 [2024-11-26 17:29:16.182064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:46.139 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.139 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:32:46.139 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:46.139 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.139 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.139 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.140 17:29:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:32:46.140 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:32:46.140 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:32:46.140 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:32:46.140 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:32:46.140 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.140 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.398 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.398 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:32:46.398 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:32:46.398 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:32:46.398 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.398 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.398 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.398 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:32:46.398 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:32:46.398 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:32:46.398 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.398 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.398 17:29:16 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.399 [2024-11-26 17:29:16.281749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:46.399 [2024-11-26 17:29:16.281830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:46.399 [2024-11-26 17:29:16.281856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:32:46.399 [2024-11-26 17:29:16.281868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:46.399 [2024-11-26 17:29:16.284824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:46.399 [2024-11-26 17:29:16.284867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:46.399 [2024-11-26 17:29:16.284967] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:46.399 [2024-11-26 17:29:16.285023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:46.399 pt2 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:46.399 "name": "raid_bdev1", 00:32:46.399 "uuid": "97d4ed85-b2f1-42c6-978b-7e98b34d3215", 00:32:46.399 "strip_size_kb": 64, 00:32:46.399 "state": "configuring", 00:32:46.399 "raid_level": "raid5f", 00:32:46.399 "superblock": true, 00:32:46.399 
"num_base_bdevs": 4, 00:32:46.399 "num_base_bdevs_discovered": 1, 00:32:46.399 "num_base_bdevs_operational": 3, 00:32:46.399 "base_bdevs_list": [ 00:32:46.399 { 00:32:46.399 "name": null, 00:32:46.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:46.399 "is_configured": false, 00:32:46.399 "data_offset": 2048, 00:32:46.399 "data_size": 63488 00:32:46.399 }, 00:32:46.399 { 00:32:46.399 "name": "pt2", 00:32:46.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:46.399 "is_configured": true, 00:32:46.399 "data_offset": 2048, 00:32:46.399 "data_size": 63488 00:32:46.399 }, 00:32:46.399 { 00:32:46.399 "name": null, 00:32:46.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:46.399 "is_configured": false, 00:32:46.399 "data_offset": 2048, 00:32:46.399 "data_size": 63488 00:32:46.399 }, 00:32:46.399 { 00:32:46.399 "name": null, 00:32:46.399 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:46.399 "is_configured": false, 00:32:46.399 "data_offset": 2048, 00:32:46.399 "data_size": 63488 00:32:46.399 } 00:32:46.399 ] 00:32:46.399 }' 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:46.399 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.657 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:32:46.657 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:32:46.657 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:46.657 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.657 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.657 [2024-11-26 17:29:16.745808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:46.657 [2024-11-26 
17:29:16.746108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:46.657 [2024-11-26 17:29:16.746154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:32:46.657 [2024-11-26 17:29:16.746169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:46.657 [2024-11-26 17:29:16.746784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:46.657 [2024-11-26 17:29:16.746816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:46.657 [2024-11-26 17:29:16.746935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:32:46.657 [2024-11-26 17:29:16.746965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:46.657 pt3 00:32:46.657 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.657 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:32:46.657 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:46.657 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:46.657 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:46.658 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:46.658 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:46.658 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:46.658 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:46.658 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:32:46.658 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:46.658 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:46.658 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.658 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.658 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:46.927 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.927 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:46.927 "name": "raid_bdev1", 00:32:46.927 "uuid": "97d4ed85-b2f1-42c6-978b-7e98b34d3215", 00:32:46.927 "strip_size_kb": 64, 00:32:46.927 "state": "configuring", 00:32:46.927 "raid_level": "raid5f", 00:32:46.927 "superblock": true, 00:32:46.927 "num_base_bdevs": 4, 00:32:46.927 "num_base_bdevs_discovered": 2, 00:32:46.927 "num_base_bdevs_operational": 3, 00:32:46.927 "base_bdevs_list": [ 00:32:46.927 { 00:32:46.927 "name": null, 00:32:46.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:46.927 "is_configured": false, 00:32:46.927 "data_offset": 2048, 00:32:46.927 "data_size": 63488 00:32:46.927 }, 00:32:46.927 { 00:32:46.927 "name": "pt2", 00:32:46.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:46.927 "is_configured": true, 00:32:46.927 "data_offset": 2048, 00:32:46.927 "data_size": 63488 00:32:46.927 }, 00:32:46.927 { 00:32:46.927 "name": "pt3", 00:32:46.927 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:46.927 "is_configured": true, 00:32:46.927 "data_offset": 2048, 00:32:46.927 "data_size": 63488 00:32:46.927 }, 00:32:46.927 { 00:32:46.927 "name": null, 00:32:46.927 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:46.927 "is_configured": false, 00:32:46.927 "data_offset": 2048, 
00:32:46.927 "data_size": 63488 00:32:46.927 } 00:32:46.927 ] 00:32:46.927 }' 00:32:46.927 17:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:46.927 17:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.186 [2024-11-26 17:29:17.189843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:47.186 [2024-11-26 17:29:17.189946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.186 [2024-11-26 17:29:17.189977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:32:47.186 [2024-11-26 17:29:17.189990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.186 [2024-11-26 17:29:17.190565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.186 [2024-11-26 17:29:17.190592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:47.186 [2024-11-26 17:29:17.190699] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:32:47.186 [2024-11-26 17:29:17.190745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:47.186 [2024-11-26 17:29:17.190897] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:32:47.186 [2024-11-26 17:29:17.190908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:47.186 [2024-11-26 17:29:17.191225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:32:47.186 [2024-11-26 17:29:17.199114] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:32:47.186 [2024-11-26 17:29:17.199147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:32:47.186 [2024-11-26 17:29:17.199512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:47.186 pt4 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:47.186 
17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:47.186 "name": "raid_bdev1", 00:32:47.186 "uuid": "97d4ed85-b2f1-42c6-978b-7e98b34d3215", 00:32:47.186 "strip_size_kb": 64, 00:32:47.186 "state": "online", 00:32:47.186 "raid_level": "raid5f", 00:32:47.186 "superblock": true, 00:32:47.186 "num_base_bdevs": 4, 00:32:47.186 "num_base_bdevs_discovered": 3, 00:32:47.186 "num_base_bdevs_operational": 3, 00:32:47.186 "base_bdevs_list": [ 00:32:47.186 { 00:32:47.186 "name": null, 00:32:47.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:47.186 "is_configured": false, 00:32:47.186 "data_offset": 2048, 00:32:47.186 "data_size": 63488 00:32:47.186 }, 00:32:47.186 { 00:32:47.186 "name": "pt2", 00:32:47.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:47.186 "is_configured": true, 00:32:47.186 "data_offset": 2048, 00:32:47.186 "data_size": 63488 00:32:47.186 }, 00:32:47.186 { 00:32:47.186 "name": "pt3", 00:32:47.186 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:47.186 "is_configured": true, 00:32:47.186 "data_offset": 2048, 00:32:47.186 "data_size": 63488 00:32:47.186 }, 00:32:47.186 { 00:32:47.186 "name": "pt4", 00:32:47.186 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:47.186 "is_configured": true, 00:32:47.186 "data_offset": 2048, 00:32:47.186 "data_size": 63488 00:32:47.186 } 00:32:47.186 ] 00:32:47.186 }' 00:32:47.186 17:29:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:47.186 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.752 [2024-11-26 17:29:17.641814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:47.752 [2024-11-26 17:29:17.641865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:47.752 [2024-11-26 17:29:17.641970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:47.752 [2024-11-26 17:29:17.642062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:47.752 [2024-11-26 17:29:17.642080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.752 [2024-11-26 17:29:17.713767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:47.752 [2024-11-26 17:29:17.713852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.752 [2024-11-26 17:29:17.713885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:32:47.752 [2024-11-26 17:29:17.713906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.752 [2024-11-26 17:29:17.717169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.752 [2024-11-26 17:29:17.717217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:47.752 [2024-11-26 17:29:17.717319] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:47.752 [2024-11-26 17:29:17.717381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:47.752 
[2024-11-26 17:29:17.717560] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:32:47.752 [2024-11-26 17:29:17.717579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:47.752 [2024-11-26 17:29:17.717599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:32:47.752 [2024-11-26 17:29:17.717747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:47.752 [2024-11-26 17:29:17.717880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:47.752 pt1 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:47.752 "name": "raid_bdev1", 00:32:47.752 "uuid": "97d4ed85-b2f1-42c6-978b-7e98b34d3215", 00:32:47.752 "strip_size_kb": 64, 00:32:47.752 "state": "configuring", 00:32:47.752 "raid_level": "raid5f", 00:32:47.752 "superblock": true, 00:32:47.752 "num_base_bdevs": 4, 00:32:47.752 "num_base_bdevs_discovered": 2, 00:32:47.752 "num_base_bdevs_operational": 3, 00:32:47.752 "base_bdevs_list": [ 00:32:47.752 { 00:32:47.752 "name": null, 00:32:47.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:47.752 "is_configured": false, 00:32:47.752 "data_offset": 2048, 00:32:47.752 "data_size": 63488 00:32:47.752 }, 00:32:47.752 { 00:32:47.752 "name": "pt2", 00:32:47.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:47.752 "is_configured": true, 00:32:47.752 "data_offset": 2048, 00:32:47.752 "data_size": 63488 00:32:47.752 }, 00:32:47.752 { 00:32:47.752 "name": "pt3", 00:32:47.752 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:47.752 "is_configured": true, 00:32:47.752 "data_offset": 2048, 00:32:47.752 "data_size": 63488 00:32:47.752 }, 00:32:47.752 { 00:32:47.752 "name": null, 00:32:47.752 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:47.752 "is_configured": false, 00:32:47.752 "data_offset": 2048, 00:32:47.752 "data_size": 63488 00:32:47.752 } 00:32:47.752 ] 
00:32:47.752 }' 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:47.752 17:29:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.319 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:32:48.319 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:48.319 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.319 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.319 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.319 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:32:48.319 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:48.319 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.319 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.319 [2024-11-26 17:29:18.193809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:48.319 [2024-11-26 17:29:18.193907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:48.319 [2024-11-26 17:29:18.193940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:32:48.319 [2024-11-26 17:29:18.193955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:48.319 [2024-11-26 17:29:18.194565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:48.319 [2024-11-26 17:29:18.194590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:32:48.319 [2024-11-26 17:29:18.194697] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:32:48.319 [2024-11-26 17:29:18.194731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:48.319 [2024-11-26 17:29:18.194920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:32:48.319 [2024-11-26 17:29:18.194932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:48.319 [2024-11-26 17:29:18.195272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:32:48.319 [2024-11-26 17:29:18.204427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:32:48.320 [2024-11-26 17:29:18.204467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:32:48.320 [2024-11-26 17:29:18.204790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:48.320 pt4 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:48.320 17:29:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:48.320 "name": "raid_bdev1", 00:32:48.320 "uuid": "97d4ed85-b2f1-42c6-978b-7e98b34d3215", 00:32:48.320 "strip_size_kb": 64, 00:32:48.320 "state": "online", 00:32:48.320 "raid_level": "raid5f", 00:32:48.320 "superblock": true, 00:32:48.320 "num_base_bdevs": 4, 00:32:48.320 "num_base_bdevs_discovered": 3, 00:32:48.320 "num_base_bdevs_operational": 3, 00:32:48.320 "base_bdevs_list": [ 00:32:48.320 { 00:32:48.320 "name": null, 00:32:48.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:48.320 "is_configured": false, 00:32:48.320 "data_offset": 2048, 00:32:48.320 "data_size": 63488 00:32:48.320 }, 00:32:48.320 { 00:32:48.320 "name": "pt2", 00:32:48.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:48.320 "is_configured": true, 00:32:48.320 "data_offset": 2048, 00:32:48.320 "data_size": 63488 00:32:48.320 }, 00:32:48.320 { 00:32:48.320 "name": "pt3", 00:32:48.320 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:48.320 "is_configured": true, 00:32:48.320 "data_offset": 2048, 00:32:48.320 "data_size": 63488 
00:32:48.320 }, 00:32:48.320 { 00:32:48.320 "name": "pt4", 00:32:48.320 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:48.320 "is_configured": true, 00:32:48.320 "data_offset": 2048, 00:32:48.320 "data_size": 63488 00:32:48.320 } 00:32:48.320 ] 00:32:48.320 }' 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:48.320 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.579 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:32:48.579 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:48.579 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.579 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.579 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.579 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:32:48.579 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:32:48.579 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:48.579 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.579 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.579 [2024-11-26 17:29:18.663106] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:48.579 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.837 17:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 97d4ed85-b2f1-42c6-978b-7e98b34d3215 '!=' 97d4ed85-b2f1-42c6-978b-7e98b34d3215 ']' 00:32:48.837 17:29:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84275 00:32:48.837 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84275 ']' 00:32:48.837 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84275 00:32:48.837 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:32:48.837 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:48.837 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84275 00:32:48.837 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:48.837 killing process with pid 84275 00:32:48.837 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:48.837 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84275' 00:32:48.837 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84275 00:32:48.837 [2024-11-26 17:29:18.756580] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:48.837 17:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84275 00:32:48.837 [2024-11-26 17:29:18.756730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:48.837 [2024-11-26 17:29:18.756828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:48.837 [2024-11-26 17:29:18.756851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:32:49.097 [2024-11-26 17:29:19.186760] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:50.482 17:29:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:32:50.482 
00:32:50.482 real 0m8.726s 00:32:50.482 user 0m13.486s 00:32:50.482 sys 0m1.994s 00:32:50.482 ************************************ 00:32:50.482 END TEST raid5f_superblock_test 00:32:50.482 ************************************ 00:32:50.482 17:29:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.482 17:29:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.482 17:29:20 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:32:50.482 17:29:20 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:32:50.482 17:29:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:32:50.482 17:29:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.482 17:29:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:50.482 ************************************ 00:32:50.482 START TEST raid5f_rebuild_test 00:32:50.482 ************************************ 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:32:50.482 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:32:50.483 17:29:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:32:50.483 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:32:50.483 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:32:50.483 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:32:50.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.483 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84767 00:32:50.483 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84767 00:32:50.483 17:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84767 ']' 00:32:50.483 17:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.483 17:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:50.483 17:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.483 17:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:50.483 17:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:50.483 17:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.742 [2024-11-26 17:29:20.637301] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:32:50.742 [2024-11-26 17:29:20.637707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:32:50.742 Zero copy mechanism will not be used. 
00:32:50.742 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84767 ] 00:32:50.742 [2024-11-26 17:29:20.825635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.004 [2024-11-26 17:29:20.971133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.264 [2024-11-26 17:29:21.205107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:51.264 [2024-11-26 17:29:21.205395] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.523 BaseBdev1_malloc 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.523 [2024-11-26 17:29:21.553388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:51.523 [2024-11-26 17:29:21.553486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:32:51.523 [2024-11-26 17:29:21.553534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:51.523 [2024-11-26 17:29:21.553553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.523 [2024-11-26 17:29:21.556451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.523 [2024-11-26 17:29:21.556507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:51.523 BaseBdev1 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.523 BaseBdev2_malloc 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.523 [2024-11-26 17:29:21.615612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:51.523 [2024-11-26 17:29:21.615713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.523 [2024-11-26 17:29:21.615750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:51.523 [2024-11-26 17:29:21.615766] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.523 [2024-11-26 17:29:21.618609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.523 [2024-11-26 17:29:21.618670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:51.523 BaseBdev2 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.523 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.782 BaseBdev3_malloc 00:32:51.782 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.782 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:32:51.782 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.782 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.782 [2024-11-26 17:29:21.684430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:51.782 [2024-11-26 17:29:21.684779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.783 [2024-11-26 17:29:21.684827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:51.783 [2024-11-26 17:29:21.684849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.783 [2024-11-26 17:29:21.688038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.783 [2024-11-26 
17:29:21.688259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:51.783 BaseBdev3 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.783 BaseBdev4_malloc 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.783 [2024-11-26 17:29:21.746760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:32:51.783 [2024-11-26 17:29:21.746862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.783 [2024-11-26 17:29:21.746892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:51.783 [2024-11-26 17:29:21.746908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.783 [2024-11-26 17:29:21.749742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.783 [2024-11-26 17:29:21.749991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:51.783 BaseBdev4 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.783 spare_malloc 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.783 spare_delay 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.783 [2024-11-26 17:29:21.819850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:51.783 [2024-11-26 17:29:21.819938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.783 [2024-11-26 17:29:21.819968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:32:51.783 [2024-11-26 17:29:21.819985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.783 [2024-11-26 17:29:21.822974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.783 [2024-11-26 17:29:21.823244] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:51.783 spare 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.783 [2024-11-26 17:29:21.832019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:51.783 [2024-11-26 17:29:21.834519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:51.783 [2024-11-26 17:29:21.834749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:51.783 [2024-11-26 17:29:21.834823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:51.783 [2024-11-26 17:29:21.834926] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:51.783 [2024-11-26 17:29:21.834943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:51.783 [2024-11-26 17:29:21.835282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:51.783 [2024-11-26 17:29:21.843798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:51.783 [2024-11-26 17:29:21.843823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:51.783 [2024-11-26 17:29:21.844084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.783 17:29:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.783 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.042 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:52.042 "name": "raid_bdev1", 00:32:52.042 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:32:52.042 "strip_size_kb": 64, 00:32:52.042 "state": "online", 00:32:52.042 "raid_level": "raid5f", 00:32:52.042 "superblock": false, 00:32:52.042 "num_base_bdevs": 4, 00:32:52.042 
"num_base_bdevs_discovered": 4, 00:32:52.042 "num_base_bdevs_operational": 4, 00:32:52.042 "base_bdevs_list": [ 00:32:52.042 { 00:32:52.042 "name": "BaseBdev1", 00:32:52.042 "uuid": "df102b3d-14d3-50ec-8bee-322dba598896", 00:32:52.042 "is_configured": true, 00:32:52.042 "data_offset": 0, 00:32:52.042 "data_size": 65536 00:32:52.042 }, 00:32:52.042 { 00:32:52.042 "name": "BaseBdev2", 00:32:52.042 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:32:52.042 "is_configured": true, 00:32:52.042 "data_offset": 0, 00:32:52.042 "data_size": 65536 00:32:52.042 }, 00:32:52.042 { 00:32:52.042 "name": "BaseBdev3", 00:32:52.042 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:32:52.042 "is_configured": true, 00:32:52.042 "data_offset": 0, 00:32:52.042 "data_size": 65536 00:32:52.042 }, 00:32:52.042 { 00:32:52.042 "name": "BaseBdev4", 00:32:52.042 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:32:52.042 "is_configured": true, 00:32:52.042 "data_offset": 0, 00:32:52.042 "data_size": 65536 00:32:52.042 } 00:32:52.042 ] 00:32:52.042 }' 00:32:52.042 17:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:52.042 17:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.301 [2024-11-26 17:29:22.314052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:52.301 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:52.560 [2024-11-26 17:29:22.625872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:32:52.560 /dev/nbd0 00:32:52.560 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:52.818 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:52.818 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:52.818 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:32:52.818 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:52.818 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:52.819 1+0 records in 00:32:52.819 1+0 records out 00:32:52.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405302 s, 10.1 MB/s 00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:32:52.819 17:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:32:53.386 512+0 records in 00:32:53.386 512+0 records out 00:32:53.386 100663296 bytes (101 MB, 96 MiB) copied, 0.561743 s, 179 MB/s 00:32:53.386 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:32:53.386 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:53.386 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:53.386 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:53.386 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:32:53.386 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:53.386 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:53.645 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:53.645 [2024-11-26 17:29:23.516306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:32:53.645 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:53.645 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:53.645 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:53.645 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:53.645 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:53.645 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:53.645 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:53.645 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:32:53.645 17:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.645 17:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.645 [2024-11-26 17:29:23.535245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:53.645 17:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.645 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:53.645 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:53.646 "name": "raid_bdev1", 00:32:53.646 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:32:53.646 "strip_size_kb": 64, 00:32:53.646 "state": "online", 00:32:53.646 "raid_level": "raid5f", 00:32:53.646 "superblock": false, 00:32:53.646 "num_base_bdevs": 4, 00:32:53.646 "num_base_bdevs_discovered": 3, 00:32:53.646 "num_base_bdevs_operational": 3, 00:32:53.646 "base_bdevs_list": [ 00:32:53.646 { 00:32:53.646 "name": null, 00:32:53.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:53.646 "is_configured": false, 00:32:53.646 "data_offset": 0, 00:32:53.646 "data_size": 65536 00:32:53.646 }, 00:32:53.646 { 00:32:53.646 "name": "BaseBdev2", 00:32:53.646 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:32:53.646 "is_configured": true, 00:32:53.646 "data_offset": 0, 00:32:53.646 "data_size": 65536 00:32:53.646 }, 00:32:53.646 { 00:32:53.646 "name": "BaseBdev3", 00:32:53.646 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:32:53.646 "is_configured": true, 00:32:53.646 
"data_offset": 0, 00:32:53.646 "data_size": 65536 00:32:53.646 }, 00:32:53.646 { 00:32:53.646 "name": "BaseBdev4", 00:32:53.646 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:32:53.646 "is_configured": true, 00:32:53.646 "data_offset": 0, 00:32:53.646 "data_size": 65536 00:32:53.646 } 00:32:53.646 ] 00:32:53.646 }' 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:53.646 17:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.904 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:53.904 17:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.904 17:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.904 [2024-11-26 17:29:23.966722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:53.904 [2024-11-26 17:29:23.986403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:32:53.904 17:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.904 17:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:32:53.904 [2024-11-26 17:29:23.999108] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:55.292 17:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:55.292 17:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:55.292 17:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:55.292 17:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:55.292 17:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:55.292 
17:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:55.292 17:29:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.292 17:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:55.292 17:29:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.292 17:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.292 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:55.292 "name": "raid_bdev1", 00:32:55.292 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:32:55.292 "strip_size_kb": 64, 00:32:55.292 "state": "online", 00:32:55.292 "raid_level": "raid5f", 00:32:55.292 "superblock": false, 00:32:55.292 "num_base_bdevs": 4, 00:32:55.292 "num_base_bdevs_discovered": 4, 00:32:55.292 "num_base_bdevs_operational": 4, 00:32:55.292 "process": { 00:32:55.292 "type": "rebuild", 00:32:55.292 "target": "spare", 00:32:55.292 "progress": { 00:32:55.292 "blocks": 17280, 00:32:55.292 "percent": 8 00:32:55.292 } 00:32:55.292 }, 00:32:55.292 "base_bdevs_list": [ 00:32:55.292 { 00:32:55.292 "name": "spare", 00:32:55.292 "uuid": "b8191417-020d-592f-8e88-827fbb640589", 00:32:55.292 "is_configured": true, 00:32:55.292 "data_offset": 0, 00:32:55.292 "data_size": 65536 00:32:55.292 }, 00:32:55.292 { 00:32:55.292 "name": "BaseBdev2", 00:32:55.292 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:32:55.292 "is_configured": true, 00:32:55.292 "data_offset": 0, 00:32:55.292 "data_size": 65536 00:32:55.292 }, 00:32:55.292 { 00:32:55.292 "name": "BaseBdev3", 00:32:55.292 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:32:55.292 "is_configured": true, 00:32:55.292 "data_offset": 0, 00:32:55.292 "data_size": 65536 00:32:55.292 }, 00:32:55.292 { 00:32:55.292 "name": "BaseBdev4", 00:32:55.292 "uuid": 
"d727ba99-6c47-5732-928e-1595acf3ccbb", 00:32:55.292 "is_configured": true, 00:32:55.292 "data_offset": 0, 00:32:55.292 "data_size": 65536 00:32:55.292 } 00:32:55.292 ] 00:32:55.292 }' 00:32:55.292 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:55.292 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:55.292 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:55.292 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:55.292 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:55.292 17:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.292 17:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.292 [2024-11-26 17:29:25.138746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:55.292 [2024-11-26 17:29:25.209570] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:55.292 [2024-11-26 17:29:25.209703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:55.292 [2024-11-26 17:29:25.209731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:55.292 [2024-11-26 17:29:25.209748] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:55.292 17:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.292 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:55.292 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:55.292 17:29:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:55.292 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:55.293 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:55.293 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:55.293 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:55.293 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:55.293 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:55.293 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:55.293 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:55.293 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:55.293 17:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.293 17:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.293 17:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.293 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:55.293 "name": "raid_bdev1", 00:32:55.293 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:32:55.293 "strip_size_kb": 64, 00:32:55.293 "state": "online", 00:32:55.293 "raid_level": "raid5f", 00:32:55.293 "superblock": false, 00:32:55.293 "num_base_bdevs": 4, 00:32:55.293 "num_base_bdevs_discovered": 3, 00:32:55.293 "num_base_bdevs_operational": 3, 00:32:55.293 "base_bdevs_list": [ 00:32:55.293 { 00:32:55.293 "name": null, 00:32:55.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:55.293 "is_configured": false, 00:32:55.293 "data_offset": 0, 
00:32:55.293 "data_size": 65536 00:32:55.293 }, 00:32:55.293 { 00:32:55.293 "name": "BaseBdev2", 00:32:55.293 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:32:55.293 "is_configured": true, 00:32:55.293 "data_offset": 0, 00:32:55.293 "data_size": 65536 00:32:55.293 }, 00:32:55.293 { 00:32:55.293 "name": "BaseBdev3", 00:32:55.293 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:32:55.293 "is_configured": true, 00:32:55.293 "data_offset": 0, 00:32:55.293 "data_size": 65536 00:32:55.293 }, 00:32:55.293 { 00:32:55.293 "name": "BaseBdev4", 00:32:55.293 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:32:55.293 "is_configured": true, 00:32:55.293 "data_offset": 0, 00:32:55.293 "data_size": 65536 00:32:55.293 } 00:32:55.293 ] 00:32:55.293 }' 00:32:55.293 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:55.293 17:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:55.860 "name": "raid_bdev1", 00:32:55.860 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:32:55.860 "strip_size_kb": 64, 00:32:55.860 "state": "online", 00:32:55.860 "raid_level": "raid5f", 00:32:55.860 "superblock": false, 00:32:55.860 "num_base_bdevs": 4, 00:32:55.860 "num_base_bdevs_discovered": 3, 00:32:55.860 "num_base_bdevs_operational": 3, 00:32:55.860 "base_bdevs_list": [ 00:32:55.860 { 00:32:55.860 "name": null, 00:32:55.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:55.860 "is_configured": false, 00:32:55.860 "data_offset": 0, 00:32:55.860 "data_size": 65536 00:32:55.860 }, 00:32:55.860 { 00:32:55.860 "name": "BaseBdev2", 00:32:55.860 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:32:55.860 "is_configured": true, 00:32:55.860 "data_offset": 0, 00:32:55.860 "data_size": 65536 00:32:55.860 }, 00:32:55.860 { 00:32:55.860 "name": "BaseBdev3", 00:32:55.860 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:32:55.860 "is_configured": true, 00:32:55.860 "data_offset": 0, 00:32:55.860 "data_size": 65536 00:32:55.860 }, 00:32:55.860 { 00:32:55.860 "name": "BaseBdev4", 00:32:55.860 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:32:55.860 "is_configured": true, 00:32:55.860 "data_offset": 0, 00:32:55.860 "data_size": 65536 00:32:55.860 } 00:32:55.860 ] 00:32:55.860 }' 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.860 [2024-11-26 17:29:25.826195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:55.860 [2024-11-26 17:29:25.844441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.860 17:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:32:55.860 [2024-11-26 17:29:25.856212] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:56.794 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:56.794 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:56.794 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:56.794 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:56.794 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:56.794 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:56.794 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:56.794 17:29:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.794 17:29:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.794 17:29:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.794 17:29:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:56.794 "name": "raid_bdev1", 00:32:56.794 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:32:56.794 "strip_size_kb": 64, 00:32:56.794 "state": "online", 00:32:56.794 "raid_level": "raid5f", 00:32:56.794 "superblock": false, 00:32:56.794 "num_base_bdevs": 4, 00:32:56.794 "num_base_bdevs_discovered": 4, 00:32:56.794 "num_base_bdevs_operational": 4, 00:32:56.794 "process": { 00:32:56.794 "type": "rebuild", 00:32:56.794 "target": "spare", 00:32:56.794 "progress": { 00:32:56.794 "blocks": 17280, 00:32:56.794 "percent": 8 00:32:56.794 } 00:32:56.794 }, 00:32:56.794 "base_bdevs_list": [ 00:32:56.794 { 00:32:56.794 "name": "spare", 00:32:56.794 "uuid": "b8191417-020d-592f-8e88-827fbb640589", 00:32:56.794 "is_configured": true, 00:32:56.794 "data_offset": 0, 00:32:56.794 "data_size": 65536 00:32:56.794 }, 00:32:56.794 { 00:32:56.794 "name": "BaseBdev2", 00:32:56.794 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:32:56.794 "is_configured": true, 00:32:56.794 "data_offset": 0, 00:32:56.794 "data_size": 65536 00:32:56.794 }, 00:32:56.794 { 00:32:56.794 "name": "BaseBdev3", 00:32:56.794 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:32:56.794 "is_configured": true, 00:32:56.794 "data_offset": 0, 00:32:56.794 "data_size": 65536 00:32:56.794 }, 00:32:56.794 { 00:32:56.794 "name": "BaseBdev4", 00:32:56.794 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:32:56.794 "is_configured": true, 00:32:56.794 "data_offset": 0, 00:32:56.794 "data_size": 65536 00:32:56.794 } 00:32:56.794 ] 00:32:56.794 }' 00:32:56.794 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=632 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.053 17:29:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:57.053 17:29:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.053 17:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:57.053 "name": "raid_bdev1", 00:32:57.053 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:32:57.053 "strip_size_kb": 64, 00:32:57.053 "state": "online", 00:32:57.053 "raid_level": "raid5f", 00:32:57.053 "superblock": false, 
00:32:57.053 "num_base_bdevs": 4, 00:32:57.053 "num_base_bdevs_discovered": 4, 00:32:57.053 "num_base_bdevs_operational": 4, 00:32:57.053 "process": { 00:32:57.053 "type": "rebuild", 00:32:57.053 "target": "spare", 00:32:57.053 "progress": { 00:32:57.053 "blocks": 21120, 00:32:57.053 "percent": 10 00:32:57.053 } 00:32:57.053 }, 00:32:57.053 "base_bdevs_list": [ 00:32:57.053 { 00:32:57.053 "name": "spare", 00:32:57.053 "uuid": "b8191417-020d-592f-8e88-827fbb640589", 00:32:57.053 "is_configured": true, 00:32:57.053 "data_offset": 0, 00:32:57.053 "data_size": 65536 00:32:57.053 }, 00:32:57.053 { 00:32:57.053 "name": "BaseBdev2", 00:32:57.053 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:32:57.053 "is_configured": true, 00:32:57.053 "data_offset": 0, 00:32:57.053 "data_size": 65536 00:32:57.053 }, 00:32:57.053 { 00:32:57.053 "name": "BaseBdev3", 00:32:57.053 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:32:57.053 "is_configured": true, 00:32:57.053 "data_offset": 0, 00:32:57.053 "data_size": 65536 00:32:57.053 }, 00:32:57.053 { 00:32:57.053 "name": "BaseBdev4", 00:32:57.054 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:32:57.054 "is_configured": true, 00:32:57.054 "data_offset": 0, 00:32:57.054 "data_size": 65536 00:32:57.054 } 00:32:57.054 ] 00:32:57.054 }' 00:32:57.054 17:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:57.054 17:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:57.054 17:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:57.054 17:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:57.054 17:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:58.430 "name": "raid_bdev1", 00:32:58.430 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:32:58.430 "strip_size_kb": 64, 00:32:58.430 "state": "online", 00:32:58.430 "raid_level": "raid5f", 00:32:58.430 "superblock": false, 00:32:58.430 "num_base_bdevs": 4, 00:32:58.430 "num_base_bdevs_discovered": 4, 00:32:58.430 "num_base_bdevs_operational": 4, 00:32:58.430 "process": { 00:32:58.430 "type": "rebuild", 00:32:58.430 "target": "spare", 00:32:58.430 "progress": { 00:32:58.430 "blocks": 42240, 00:32:58.430 "percent": 21 00:32:58.430 } 00:32:58.430 }, 00:32:58.430 "base_bdevs_list": [ 00:32:58.430 { 00:32:58.430 "name": "spare", 00:32:58.430 "uuid": "b8191417-020d-592f-8e88-827fbb640589", 00:32:58.430 "is_configured": true, 00:32:58.430 "data_offset": 0, 00:32:58.430 "data_size": 65536 00:32:58.430 }, 00:32:58.430 { 00:32:58.430 
"name": "BaseBdev2", 00:32:58.430 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:32:58.430 "is_configured": true, 00:32:58.430 "data_offset": 0, 00:32:58.430 "data_size": 65536 00:32:58.430 }, 00:32:58.430 { 00:32:58.430 "name": "BaseBdev3", 00:32:58.430 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:32:58.430 "is_configured": true, 00:32:58.430 "data_offset": 0, 00:32:58.430 "data_size": 65536 00:32:58.430 }, 00:32:58.430 { 00:32:58.430 "name": "BaseBdev4", 00:32:58.430 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:32:58.430 "is_configured": true, 00:32:58.430 "data_offset": 0, 00:32:58.430 "data_size": 65536 00:32:58.430 } 00:32:58.430 ] 00:32:58.430 }' 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:58.430 17:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:59.394 17:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:59.394 17:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:59.394 17:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:59.394 17:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:59.394 17:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:59.394 17:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:59.394 17:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:32:59.394 17:29:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.394 17:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:59.394 17:29:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.394 17:29:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.394 17:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:59.394 "name": "raid_bdev1", 00:32:59.394 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:32:59.394 "strip_size_kb": 64, 00:32:59.394 "state": "online", 00:32:59.394 "raid_level": "raid5f", 00:32:59.394 "superblock": false, 00:32:59.394 "num_base_bdevs": 4, 00:32:59.394 "num_base_bdevs_discovered": 4, 00:32:59.394 "num_base_bdevs_operational": 4, 00:32:59.394 "process": { 00:32:59.394 "type": "rebuild", 00:32:59.394 "target": "spare", 00:32:59.394 "progress": { 00:32:59.394 "blocks": 63360, 00:32:59.394 "percent": 32 00:32:59.394 } 00:32:59.394 }, 00:32:59.394 "base_bdevs_list": [ 00:32:59.394 { 00:32:59.394 "name": "spare", 00:32:59.394 "uuid": "b8191417-020d-592f-8e88-827fbb640589", 00:32:59.394 "is_configured": true, 00:32:59.394 "data_offset": 0, 00:32:59.394 "data_size": 65536 00:32:59.394 }, 00:32:59.394 { 00:32:59.394 "name": "BaseBdev2", 00:32:59.394 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:32:59.394 "is_configured": true, 00:32:59.394 "data_offset": 0, 00:32:59.394 "data_size": 65536 00:32:59.394 }, 00:32:59.394 { 00:32:59.395 "name": "BaseBdev3", 00:32:59.395 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:32:59.395 "is_configured": true, 00:32:59.395 "data_offset": 0, 00:32:59.395 "data_size": 65536 00:32:59.395 }, 00:32:59.395 { 00:32:59.395 "name": "BaseBdev4", 00:32:59.395 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:32:59.395 "is_configured": true, 00:32:59.395 "data_offset": 0, 00:32:59.395 
"data_size": 65536 00:32:59.395 } 00:32:59.395 ] 00:32:59.395 }' 00:32:59.395 17:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:59.395 17:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:59.395 17:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:59.395 17:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:59.395 17:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:00.447 "name": "raid_bdev1", 00:33:00.447 "uuid": 
"38b8e479-c77c-4433-b807-88b856cc905e", 00:33:00.447 "strip_size_kb": 64, 00:33:00.447 "state": "online", 00:33:00.447 "raid_level": "raid5f", 00:33:00.447 "superblock": false, 00:33:00.447 "num_base_bdevs": 4, 00:33:00.447 "num_base_bdevs_discovered": 4, 00:33:00.447 "num_base_bdevs_operational": 4, 00:33:00.447 "process": { 00:33:00.447 "type": "rebuild", 00:33:00.447 "target": "spare", 00:33:00.447 "progress": { 00:33:00.447 "blocks": 86400, 00:33:00.447 "percent": 43 00:33:00.447 } 00:33:00.447 }, 00:33:00.447 "base_bdevs_list": [ 00:33:00.447 { 00:33:00.447 "name": "spare", 00:33:00.447 "uuid": "b8191417-020d-592f-8e88-827fbb640589", 00:33:00.447 "is_configured": true, 00:33:00.447 "data_offset": 0, 00:33:00.447 "data_size": 65536 00:33:00.447 }, 00:33:00.447 { 00:33:00.447 "name": "BaseBdev2", 00:33:00.447 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:33:00.447 "is_configured": true, 00:33:00.447 "data_offset": 0, 00:33:00.447 "data_size": 65536 00:33:00.447 }, 00:33:00.447 { 00:33:00.447 "name": "BaseBdev3", 00:33:00.447 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:33:00.447 "is_configured": true, 00:33:00.447 "data_offset": 0, 00:33:00.447 "data_size": 65536 00:33:00.447 }, 00:33:00.447 { 00:33:00.447 "name": "BaseBdev4", 00:33:00.447 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:33:00.447 "is_configured": true, 00:33:00.447 "data_offset": 0, 00:33:00.447 "data_size": 65536 00:33:00.447 } 00:33:00.447 ] 00:33:00.447 }' 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:00.447 17:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:01.834 "name": "raid_bdev1", 00:33:01.834 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:33:01.834 "strip_size_kb": 64, 00:33:01.834 "state": "online", 00:33:01.834 "raid_level": "raid5f", 00:33:01.834 "superblock": false, 00:33:01.834 "num_base_bdevs": 4, 00:33:01.834 "num_base_bdevs_discovered": 4, 00:33:01.834 "num_base_bdevs_operational": 4, 00:33:01.834 "process": { 00:33:01.834 "type": "rebuild", 00:33:01.834 "target": "spare", 00:33:01.834 "progress": { 00:33:01.834 "blocks": 107520, 00:33:01.834 "percent": 54 00:33:01.834 } 00:33:01.834 }, 00:33:01.834 "base_bdevs_list": [ 00:33:01.834 { 00:33:01.834 "name": "spare", 00:33:01.834 "uuid": 
"b8191417-020d-592f-8e88-827fbb640589", 00:33:01.834 "is_configured": true, 00:33:01.834 "data_offset": 0, 00:33:01.834 "data_size": 65536 00:33:01.834 }, 00:33:01.834 { 00:33:01.834 "name": "BaseBdev2", 00:33:01.834 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:33:01.834 "is_configured": true, 00:33:01.834 "data_offset": 0, 00:33:01.834 "data_size": 65536 00:33:01.834 }, 00:33:01.834 { 00:33:01.834 "name": "BaseBdev3", 00:33:01.834 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:33:01.834 "is_configured": true, 00:33:01.834 "data_offset": 0, 00:33:01.834 "data_size": 65536 00:33:01.834 }, 00:33:01.834 { 00:33:01.834 "name": "BaseBdev4", 00:33:01.834 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:33:01.834 "is_configured": true, 00:33:01.834 "data_offset": 0, 00:33:01.834 "data_size": 65536 00:33:01.834 } 00:33:01.834 ] 00:33:01.834 }' 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:01.834 17:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:01.835 17:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:02.772 17:29:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:02.772 "name": "raid_bdev1", 00:33:02.772 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:33:02.772 "strip_size_kb": 64, 00:33:02.772 "state": "online", 00:33:02.772 "raid_level": "raid5f", 00:33:02.772 "superblock": false, 00:33:02.772 "num_base_bdevs": 4, 00:33:02.772 "num_base_bdevs_discovered": 4, 00:33:02.772 "num_base_bdevs_operational": 4, 00:33:02.772 "process": { 00:33:02.772 "type": "rebuild", 00:33:02.772 "target": "spare", 00:33:02.772 "progress": { 00:33:02.772 "blocks": 128640, 00:33:02.772 "percent": 65 00:33:02.772 } 00:33:02.772 }, 00:33:02.772 "base_bdevs_list": [ 00:33:02.772 { 00:33:02.772 "name": "spare", 00:33:02.772 "uuid": "b8191417-020d-592f-8e88-827fbb640589", 00:33:02.772 "is_configured": true, 00:33:02.772 "data_offset": 0, 00:33:02.772 "data_size": 65536 00:33:02.772 }, 00:33:02.772 { 00:33:02.772 "name": "BaseBdev2", 00:33:02.772 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:33:02.772 "is_configured": true, 00:33:02.772 "data_offset": 0, 00:33:02.772 "data_size": 65536 00:33:02.772 }, 00:33:02.772 { 00:33:02.772 "name": "BaseBdev3", 00:33:02.772 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:33:02.772 "is_configured": true, 00:33:02.772 "data_offset": 0, 00:33:02.772 "data_size": 65536 00:33:02.772 }, 
00:33:02.772 { 00:33:02.772 "name": "BaseBdev4", 00:33:02.772 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:33:02.772 "is_configured": true, 00:33:02.772 "data_offset": 0, 00:33:02.772 "data_size": 65536 00:33:02.772 } 00:33:02.772 ] 00:33:02.772 }' 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:02.772 17:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:04.147 17:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:04.147 17:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:04.147 17:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:04.147 17:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:04.147 17:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:04.147 17:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:04.147 17:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.147 17:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.147 17:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.147 17:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:04.147 17:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:04.147 17:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:04.148 "name": "raid_bdev1", 00:33:04.148 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:33:04.148 "strip_size_kb": 64, 00:33:04.148 "state": "online", 00:33:04.148 "raid_level": "raid5f", 00:33:04.148 "superblock": false, 00:33:04.148 "num_base_bdevs": 4, 00:33:04.148 "num_base_bdevs_discovered": 4, 00:33:04.148 "num_base_bdevs_operational": 4, 00:33:04.148 "process": { 00:33:04.148 "type": "rebuild", 00:33:04.148 "target": "spare", 00:33:04.148 "progress": { 00:33:04.148 "blocks": 151680, 00:33:04.148 "percent": 77 00:33:04.148 } 00:33:04.148 }, 00:33:04.148 "base_bdevs_list": [ 00:33:04.148 { 00:33:04.148 "name": "spare", 00:33:04.148 "uuid": "b8191417-020d-592f-8e88-827fbb640589", 00:33:04.148 "is_configured": true, 00:33:04.148 "data_offset": 0, 00:33:04.148 "data_size": 65536 00:33:04.148 }, 00:33:04.148 { 00:33:04.148 "name": "BaseBdev2", 00:33:04.148 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:33:04.148 "is_configured": true, 00:33:04.148 "data_offset": 0, 00:33:04.148 "data_size": 65536 00:33:04.148 }, 00:33:04.148 { 00:33:04.148 "name": "BaseBdev3", 00:33:04.148 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:33:04.148 "is_configured": true, 00:33:04.148 "data_offset": 0, 00:33:04.148 "data_size": 65536 00:33:04.148 }, 00:33:04.148 { 00:33:04.148 "name": "BaseBdev4", 00:33:04.148 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:33:04.148 "is_configured": true, 00:33:04.148 "data_offset": 0, 00:33:04.148 "data_size": 65536 00:33:04.148 } 00:33:04.148 ] 00:33:04.148 }' 00:33:04.148 17:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:04.148 17:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:04.148 17:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:04.148 17:29:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:04.148 17:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:05.082 17:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:05.082 17:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:05.082 17:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:05.082 17:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:05.082 17:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:05.082 17:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:05.082 17:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.082 17:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:05.082 17:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.082 17:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.082 17:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.082 17:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:05.082 "name": "raid_bdev1", 00:33:05.082 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:33:05.082 "strip_size_kb": 64, 00:33:05.082 "state": "online", 00:33:05.082 "raid_level": "raid5f", 00:33:05.082 "superblock": false, 00:33:05.082 "num_base_bdevs": 4, 00:33:05.082 "num_base_bdevs_discovered": 4, 00:33:05.082 "num_base_bdevs_operational": 4, 00:33:05.082 "process": { 00:33:05.082 "type": "rebuild", 00:33:05.082 "target": "spare", 00:33:05.082 "progress": { 00:33:05.082 "blocks": 172800, 
00:33:05.082 "percent": 87 00:33:05.082 } 00:33:05.082 }, 00:33:05.082 "base_bdevs_list": [ 00:33:05.082 { 00:33:05.082 "name": "spare", 00:33:05.082 "uuid": "b8191417-020d-592f-8e88-827fbb640589", 00:33:05.082 "is_configured": true, 00:33:05.082 "data_offset": 0, 00:33:05.082 "data_size": 65536 00:33:05.082 }, 00:33:05.082 { 00:33:05.082 "name": "BaseBdev2", 00:33:05.082 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:33:05.082 "is_configured": true, 00:33:05.082 "data_offset": 0, 00:33:05.082 "data_size": 65536 00:33:05.082 }, 00:33:05.082 { 00:33:05.082 "name": "BaseBdev3", 00:33:05.082 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:33:05.082 "is_configured": true, 00:33:05.082 "data_offset": 0, 00:33:05.082 "data_size": 65536 00:33:05.082 }, 00:33:05.082 { 00:33:05.082 "name": "BaseBdev4", 00:33:05.082 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:33:05.082 "is_configured": true, 00:33:05.082 "data_offset": 0, 00:33:05.082 "data_size": 65536 00:33:05.082 } 00:33:05.082 ] 00:33:05.082 }' 00:33:05.082 17:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:05.082 17:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:05.082 17:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:05.082 17:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:05.082 17:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:06.525 17:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:06.525 17:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:06.525 17:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:06.525 17:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:33:06.525 17:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:06.525 17:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:06.525 17:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:06.525 17:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.525 17:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.525 17:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:06.525 17:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.525 17:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:06.525 "name": "raid_bdev1", 00:33:06.525 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:33:06.525 "strip_size_kb": 64, 00:33:06.525 "state": "online", 00:33:06.525 "raid_level": "raid5f", 00:33:06.525 "superblock": false, 00:33:06.525 "num_base_bdevs": 4, 00:33:06.525 "num_base_bdevs_discovered": 4, 00:33:06.525 "num_base_bdevs_operational": 4, 00:33:06.525 "process": { 00:33:06.525 "type": "rebuild", 00:33:06.525 "target": "spare", 00:33:06.525 "progress": { 00:33:06.525 "blocks": 195840, 00:33:06.525 "percent": 99 00:33:06.525 } 00:33:06.525 }, 00:33:06.525 "base_bdevs_list": [ 00:33:06.525 { 00:33:06.525 "name": "spare", 00:33:06.525 "uuid": "b8191417-020d-592f-8e88-827fbb640589", 00:33:06.525 "is_configured": true, 00:33:06.525 "data_offset": 0, 00:33:06.525 "data_size": 65536 00:33:06.525 }, 00:33:06.525 { 00:33:06.525 "name": "BaseBdev2", 00:33:06.525 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:33:06.525 "is_configured": true, 00:33:06.525 "data_offset": 0, 00:33:06.525 "data_size": 65536 00:33:06.525 }, 00:33:06.525 { 00:33:06.525 "name": "BaseBdev3", 00:33:06.525 "uuid": 
"42348aad-d844-5283-ae99-fe1d368ba4e7", 00:33:06.525 "is_configured": true, 00:33:06.525 "data_offset": 0, 00:33:06.525 "data_size": 65536 00:33:06.525 }, 00:33:06.525 { 00:33:06.525 "name": "BaseBdev4", 00:33:06.525 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:33:06.525 "is_configured": true, 00:33:06.525 "data_offset": 0, 00:33:06.525 "data_size": 65536 00:33:06.525 } 00:33:06.525 ] 00:33:06.525 }' 00:33:06.525 17:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:06.525 17:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:06.526 17:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:06.526 [2024-11-26 17:29:36.240204] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:06.526 [2024-11-26 17:29:36.240302] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:06.526 [2024-11-26 17:29:36.240363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:06.526 17:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:06.526 17:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:07.461 "name": "raid_bdev1", 00:33:07.461 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:33:07.461 "strip_size_kb": 64, 00:33:07.461 "state": "online", 00:33:07.461 "raid_level": "raid5f", 00:33:07.461 "superblock": false, 00:33:07.461 "num_base_bdevs": 4, 00:33:07.461 "num_base_bdevs_discovered": 4, 00:33:07.461 "num_base_bdevs_operational": 4, 00:33:07.461 "base_bdevs_list": [ 00:33:07.461 { 00:33:07.461 "name": "spare", 00:33:07.461 "uuid": "b8191417-020d-592f-8e88-827fbb640589", 00:33:07.461 "is_configured": true, 00:33:07.461 "data_offset": 0, 00:33:07.461 "data_size": 65536 00:33:07.461 }, 00:33:07.461 { 00:33:07.461 "name": "BaseBdev2", 00:33:07.461 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:33:07.461 "is_configured": true, 00:33:07.461 "data_offset": 0, 00:33:07.461 "data_size": 65536 00:33:07.461 }, 00:33:07.461 { 00:33:07.461 "name": "BaseBdev3", 00:33:07.461 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:33:07.461 "is_configured": true, 00:33:07.461 "data_offset": 0, 00:33:07.461 "data_size": 65536 00:33:07.461 }, 00:33:07.461 { 00:33:07.461 "name": "BaseBdev4", 00:33:07.461 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:33:07.461 "is_configured": true, 00:33:07.461 "data_offset": 0, 00:33:07.461 "data_size": 65536 00:33:07.461 } 00:33:07.461 ] 00:33:07.461 }' 00:33:07.461 17:29:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:33:07.461 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:07.462 "name": "raid_bdev1", 00:33:07.462 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:33:07.462 "strip_size_kb": 64, 00:33:07.462 "state": "online", 00:33:07.462 "raid_level": "raid5f", 00:33:07.462 "superblock": false, 00:33:07.462 "num_base_bdevs": 4, 00:33:07.462 
"num_base_bdevs_discovered": 4, 00:33:07.462 "num_base_bdevs_operational": 4, 00:33:07.462 "base_bdevs_list": [ 00:33:07.462 { 00:33:07.462 "name": "spare", 00:33:07.462 "uuid": "b8191417-020d-592f-8e88-827fbb640589", 00:33:07.462 "is_configured": true, 00:33:07.462 "data_offset": 0, 00:33:07.462 "data_size": 65536 00:33:07.462 }, 00:33:07.462 { 00:33:07.462 "name": "BaseBdev2", 00:33:07.462 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:33:07.462 "is_configured": true, 00:33:07.462 "data_offset": 0, 00:33:07.462 "data_size": 65536 00:33:07.462 }, 00:33:07.462 { 00:33:07.462 "name": "BaseBdev3", 00:33:07.462 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:33:07.462 "is_configured": true, 00:33:07.462 "data_offset": 0, 00:33:07.462 "data_size": 65536 00:33:07.462 }, 00:33:07.462 { 00:33:07.462 "name": "BaseBdev4", 00:33:07.462 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:33:07.462 "is_configured": true, 00:33:07.462 "data_offset": 0, 00:33:07.462 "data_size": 65536 00:33:07.462 } 00:33:07.462 ] 00:33:07.462 }' 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:07.462 17:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.720 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:07.720 "name": "raid_bdev1", 00:33:07.720 "uuid": "38b8e479-c77c-4433-b807-88b856cc905e", 00:33:07.720 "strip_size_kb": 64, 00:33:07.720 "state": "online", 00:33:07.720 "raid_level": "raid5f", 00:33:07.720 "superblock": false, 00:33:07.720 "num_base_bdevs": 4, 00:33:07.720 "num_base_bdevs_discovered": 4, 00:33:07.720 "num_base_bdevs_operational": 4, 00:33:07.720 "base_bdevs_list": [ 00:33:07.720 { 00:33:07.720 "name": "spare", 00:33:07.720 "uuid": "b8191417-020d-592f-8e88-827fbb640589", 00:33:07.720 "is_configured": true, 00:33:07.720 "data_offset": 0, 00:33:07.720 "data_size": 65536 00:33:07.720 }, 00:33:07.720 { 00:33:07.720 "name": "BaseBdev2", 00:33:07.720 "uuid": "cbcf234b-8c4a-5de7-a77b-e9acdfc90d79", 00:33:07.720 "is_configured": true, 00:33:07.720 
"data_offset": 0, 00:33:07.720 "data_size": 65536 00:33:07.720 }, 00:33:07.720 { 00:33:07.720 "name": "BaseBdev3", 00:33:07.720 "uuid": "42348aad-d844-5283-ae99-fe1d368ba4e7", 00:33:07.720 "is_configured": true, 00:33:07.720 "data_offset": 0, 00:33:07.720 "data_size": 65536 00:33:07.720 }, 00:33:07.720 { 00:33:07.720 "name": "BaseBdev4", 00:33:07.720 "uuid": "d727ba99-6c47-5732-928e-1595acf3ccbb", 00:33:07.720 "is_configured": true, 00:33:07.720 "data_offset": 0, 00:33:07.720 "data_size": 65536 00:33:07.720 } 00:33:07.720 ] 00:33:07.720 }' 00:33:07.720 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:07.720 17:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.979 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:07.980 17:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.980 17:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.980 [2024-11-26 17:29:37.991423] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:07.980 [2024-11-26 17:29:37.991684] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:07.980 [2024-11-26 17:29:37.991914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:07.980 [2024-11-26 17:29:37.992163] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:07.980 [2024-11-26 17:29:37.992191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:07.980 17:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.980 17:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.980 17:29:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:07.980 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:08.239 /dev/nbd0 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:08.239 17:29:38 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:08.239 1+0 records in 00:33:08.239 1+0 records out 00:33:08.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340313 s, 12.0 MB/s 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:08.239 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:33:08.498 /dev/nbd1 00:33:08.498 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:08.498 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:08.498 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:33:08.498 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:33:08.498 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:08.498 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:08.498 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:33:08.498 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:33:08.498 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:08.498 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:08.498 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:08.498 1+0 records in 00:33:08.498 1+0 records out 00:33:08.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424417 s, 9.7 MB/s 00:33:08.757 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:08.757 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:33:08.757 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:08.757 
17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:08.757 17:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:33:08.757 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:08.757 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:08.757 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:33:08.757 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:33:08.757 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:08.757 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:08.757 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:08.757 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:33:08.757 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:08.757 17:29:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:09.016 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:09.016 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:09.016 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:09.016 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:09.016 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:09.016 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:09.016 17:29:39 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:33:09.016 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:09.016 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:09.016 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:33:09.275 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:09.275 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:09.275 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:09.275 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:09.275 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:09.275 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:09.275 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:09.275 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:09.275 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:33:09.275 17:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84767 00:33:09.275 17:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84767 ']' 00:33:09.275 17:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84767 00:33:09.275 17:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:33:09.275 17:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:09.275 17:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84767 00:33:09.534 killing process with pid 84767 00:33:09.534 
Received shutdown signal, test time was about 60.000000 seconds 00:33:09.534 00:33:09.534 Latency(us) 00:33:09.534 [2024-11-26T17:29:39.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.534 [2024-11-26T17:29:39.648Z] =================================================================================================================== 00:33:09.534 [2024-11-26T17:29:39.648Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:09.534 17:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:09.534 17:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:09.534 17:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84767' 00:33:09.534 17:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84767 00:33:09.534 [2024-11-26 17:29:39.397590] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:09.534 17:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84767 00:33:10.101 [2024-11-26 17:29:39.908032] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:11.476 17:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:33:11.476 00:33:11.476 real 0m20.647s 00:33:11.476 user 0m24.528s 00:33:11.476 sys 0m2.683s 00:33:11.476 17:29:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:11.476 ************************************ 00:33:11.476 END TEST raid5f_rebuild_test 00:33:11.476 ************************************ 00:33:11.476 17:29:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.476 17:29:41 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:33:11.476 17:29:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:33:11.476 17:29:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:11.476 17:29:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:11.476 ************************************ 00:33:11.476 START TEST raid5f_rebuild_test_sb 00:33:11.476 ************************************ 00:33:11.476 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:33:11.476 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:33:11.476 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85289 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85289 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85289 ']' 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:11.477 17:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.477 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:11.477 Zero copy mechanism will not be used. 00:33:11.477 [2024-11-26 17:29:41.375695] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:33:11.477 [2024-11-26 17:29:41.375836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85289 ] 00:33:11.477 [2024-11-26 17:29:41.548801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.735 [2024-11-26 17:29:41.701538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.993 [2024-11-26 17:29:41.940528] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:11.993 [2024-11-26 17:29:41.940598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.253 BaseBdev1_malloc 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.253 [2024-11-26 17:29:42.301785] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:12.253 [2024-11-26 17:29:42.301861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:12.253 [2024-11-26 17:29:42.301889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:12.253 [2024-11-26 17:29:42.301906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:12.253 [2024-11-26 17:29:42.304636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:12.253 [2024-11-26 17:29:42.304681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:12.253 BaseBdev1 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.253 BaseBdev2_malloc 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.253 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.253 [2024-11-26 17:29:42.359178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:12.253 [2024-11-26 17:29:42.359374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:33:12.254 [2024-11-26 17:29:42.359411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:12.254 [2024-11-26 17:29:42.359428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:12.254 [2024-11-26 17:29:42.362223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:12.254 [2024-11-26 17:29:42.362270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:12.254 BaseBdev2 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.513 BaseBdev3_malloc 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.513 [2024-11-26 17:29:42.431036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:12.513 [2024-11-26 17:29:42.431227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:12.513 [2024-11-26 17:29:42.431264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:12.513 [2024-11-26 
17:29:42.431281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:12.513 [2024-11-26 17:29:42.434060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:12.513 [2024-11-26 17:29:42.434107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:12.513 BaseBdev3 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.513 BaseBdev4_malloc 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.513 [2024-11-26 17:29:42.488869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:33:12.513 [2024-11-26 17:29:42.488944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:12.513 [2024-11-26 17:29:42.488970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:33:12.513 [2024-11-26 17:29:42.488986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:12.513 [2024-11-26 17:29:42.491706] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:33:12.513 [2024-11-26 17:29:42.491755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:12.513 BaseBdev4 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.513 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.513 spare_malloc 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.514 spare_delay 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.514 [2024-11-26 17:29:42.560872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:12.514 [2024-11-26 17:29:42.560935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:12.514 [2024-11-26 17:29:42.560958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:33:12.514 [2024-11-26 17:29:42.560974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:12.514 [2024-11-26 17:29:42.563666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:12.514 [2024-11-26 17:29:42.563710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:12.514 spare 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.514 [2024-11-26 17:29:42.572913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:12.514 [2024-11-26 17:29:42.575391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:12.514 [2024-11-26 17:29:42.575643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:12.514 [2024-11-26 17:29:42.575712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:12.514 [2024-11-26 17:29:42.575935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:12.514 [2024-11-26 17:29:42.575955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:12.514 [2024-11-26 17:29:42.576248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:12.514 [2024-11-26 17:29:42.584918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:12.514 [2024-11-26 17:29:42.584946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:33:12.514 [2024-11-26 17:29:42.585178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:12.514 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.774 17:29:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:12.774 "name": "raid_bdev1", 00:33:12.774 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:12.774 "strip_size_kb": 64, 00:33:12.774 "state": "online", 00:33:12.774 "raid_level": "raid5f", 00:33:12.774 "superblock": true, 00:33:12.774 "num_base_bdevs": 4, 00:33:12.774 "num_base_bdevs_discovered": 4, 00:33:12.774 "num_base_bdevs_operational": 4, 00:33:12.774 "base_bdevs_list": [ 00:33:12.774 { 00:33:12.774 "name": "BaseBdev1", 00:33:12.774 "uuid": "3245feb0-892c-5e08-bcd5-9f140fed55aa", 00:33:12.774 "is_configured": true, 00:33:12.774 "data_offset": 2048, 00:33:12.774 "data_size": 63488 00:33:12.774 }, 00:33:12.774 { 00:33:12.774 "name": "BaseBdev2", 00:33:12.774 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:12.774 "is_configured": true, 00:33:12.774 "data_offset": 2048, 00:33:12.774 "data_size": 63488 00:33:12.774 }, 00:33:12.774 { 00:33:12.774 "name": "BaseBdev3", 00:33:12.774 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:12.774 "is_configured": true, 00:33:12.774 "data_offset": 2048, 00:33:12.774 "data_size": 63488 00:33:12.774 }, 00:33:12.774 { 00:33:12.774 "name": "BaseBdev4", 00:33:12.774 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:12.774 "is_configured": true, 00:33:12.774 "data_offset": 2048, 00:33:12.774 "data_size": 63488 00:33:12.774 } 00:33:12.774 ] 00:33:12.774 }' 00:33:12.774 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:12.774 17:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.033 17:29:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.033 [2024-11-26 17:29:43.014678] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:13.033 17:29:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:13.033 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:13.291 [2024-11-26 17:29:43.314092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:33:13.291 /dev/nbd0 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:13.291 1+0 records in 00:33:13.291 
1+0 records out 00:33:13.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040058 s, 10.2 MB/s 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:33:13.291 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:33:13.859 496+0 records in 00:33:13.859 496+0 records out 00:33:13.859 97517568 bytes (98 MB, 93 MiB) copied, 0.546709 s, 178 MB/s 00:33:13.859 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:33:13.859 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:13.859 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:13.859 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:13.859 17:29:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:33:13.859 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:13.859 17:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:14.118 [2024-11-26 17:29:44.195295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:14.118 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.119 [2024-11-26 17:29:44.217904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:14.119 17:29:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.119 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:14.377 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.377 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.377 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.377 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:14.377 "name": "raid_bdev1", 00:33:14.377 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:14.377 "strip_size_kb": 64, 00:33:14.377 "state": "online", 00:33:14.377 "raid_level": "raid5f", 00:33:14.377 "superblock": true, 00:33:14.378 "num_base_bdevs": 4, 00:33:14.378 "num_base_bdevs_discovered": 3, 00:33:14.378 "num_base_bdevs_operational": 3, 00:33:14.378 
"base_bdevs_list": [ 00:33:14.378 { 00:33:14.378 "name": null, 00:33:14.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.378 "is_configured": false, 00:33:14.378 "data_offset": 0, 00:33:14.378 "data_size": 63488 00:33:14.378 }, 00:33:14.378 { 00:33:14.378 "name": "BaseBdev2", 00:33:14.378 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:14.378 "is_configured": true, 00:33:14.378 "data_offset": 2048, 00:33:14.378 "data_size": 63488 00:33:14.378 }, 00:33:14.378 { 00:33:14.378 "name": "BaseBdev3", 00:33:14.378 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:14.378 "is_configured": true, 00:33:14.378 "data_offset": 2048, 00:33:14.378 "data_size": 63488 00:33:14.378 }, 00:33:14.378 { 00:33:14.378 "name": "BaseBdev4", 00:33:14.378 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:14.378 "is_configured": true, 00:33:14.378 "data_offset": 2048, 00:33:14.378 "data_size": 63488 00:33:14.378 } 00:33:14.378 ] 00:33:14.378 }' 00:33:14.378 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:14.378 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.636 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:14.636 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.636 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.636 [2024-11-26 17:29:44.681842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:14.636 [2024-11-26 17:29:44.701426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:33:14.636 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.636 17:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:33:14.636 [2024-11-26 17:29:44.713854] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:16.014 "name": "raid_bdev1", 00:33:16.014 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:16.014 "strip_size_kb": 64, 00:33:16.014 "state": "online", 00:33:16.014 "raid_level": "raid5f", 00:33:16.014 "superblock": true, 00:33:16.014 "num_base_bdevs": 4, 00:33:16.014 "num_base_bdevs_discovered": 4, 00:33:16.014 "num_base_bdevs_operational": 4, 00:33:16.014 "process": { 00:33:16.014 "type": "rebuild", 00:33:16.014 "target": "spare", 00:33:16.014 "progress": { 00:33:16.014 "blocks": 17280, 00:33:16.014 "percent": 9 00:33:16.014 } 00:33:16.014 }, 00:33:16.014 "base_bdevs_list": [ 00:33:16.014 { 00:33:16.014 "name": "spare", 00:33:16.014 "uuid": 
"c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:16.014 "is_configured": true, 00:33:16.014 "data_offset": 2048, 00:33:16.014 "data_size": 63488 00:33:16.014 }, 00:33:16.014 { 00:33:16.014 "name": "BaseBdev2", 00:33:16.014 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:16.014 "is_configured": true, 00:33:16.014 "data_offset": 2048, 00:33:16.014 "data_size": 63488 00:33:16.014 }, 00:33:16.014 { 00:33:16.014 "name": "BaseBdev3", 00:33:16.014 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:16.014 "is_configured": true, 00:33:16.014 "data_offset": 2048, 00:33:16.014 "data_size": 63488 00:33:16.014 }, 00:33:16.014 { 00:33:16.014 "name": "BaseBdev4", 00:33:16.014 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:16.014 "is_configured": true, 00:33:16.014 "data_offset": 2048, 00:33:16.014 "data_size": 63488 00:33:16.014 } 00:33:16.014 ] 00:33:16.014 }' 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.014 [2024-11-26 17:29:45.841975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:16.014 [2024-11-26 17:29:45.924250] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:16.014 [2024-11-26 17:29:45.924369] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:16.014 [2024-11-26 17:29:45.924391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:16.014 [2024-11-26 17:29:45.924405] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.014 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.015 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:33:16.015 17:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.015 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:16.015 "name": "raid_bdev1", 00:33:16.015 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:16.015 "strip_size_kb": 64, 00:33:16.015 "state": "online", 00:33:16.015 "raid_level": "raid5f", 00:33:16.015 "superblock": true, 00:33:16.015 "num_base_bdevs": 4, 00:33:16.015 "num_base_bdevs_discovered": 3, 00:33:16.015 "num_base_bdevs_operational": 3, 00:33:16.015 "base_bdevs_list": [ 00:33:16.015 { 00:33:16.015 "name": null, 00:33:16.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:16.015 "is_configured": false, 00:33:16.015 "data_offset": 0, 00:33:16.015 "data_size": 63488 00:33:16.015 }, 00:33:16.015 { 00:33:16.015 "name": "BaseBdev2", 00:33:16.015 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:16.015 "is_configured": true, 00:33:16.015 "data_offset": 2048, 00:33:16.015 "data_size": 63488 00:33:16.015 }, 00:33:16.015 { 00:33:16.015 "name": "BaseBdev3", 00:33:16.015 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:16.015 "is_configured": true, 00:33:16.015 "data_offset": 2048, 00:33:16.015 "data_size": 63488 00:33:16.015 }, 00:33:16.015 { 00:33:16.015 "name": "BaseBdev4", 00:33:16.015 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:16.015 "is_configured": true, 00:33:16.015 "data_offset": 2048, 00:33:16.015 "data_size": 63488 00:33:16.015 } 00:33:16.015 ] 00:33:16.015 }' 00:33:16.015 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:16.015 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:16.583 
17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:16.583 "name": "raid_bdev1", 00:33:16.583 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:16.583 "strip_size_kb": 64, 00:33:16.583 "state": "online", 00:33:16.583 "raid_level": "raid5f", 00:33:16.583 "superblock": true, 00:33:16.583 "num_base_bdevs": 4, 00:33:16.583 "num_base_bdevs_discovered": 3, 00:33:16.583 "num_base_bdevs_operational": 3, 00:33:16.583 "base_bdevs_list": [ 00:33:16.583 { 00:33:16.583 "name": null, 00:33:16.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:16.583 "is_configured": false, 00:33:16.583 "data_offset": 0, 00:33:16.583 "data_size": 63488 00:33:16.583 }, 00:33:16.583 { 00:33:16.583 "name": "BaseBdev2", 00:33:16.583 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:16.583 "is_configured": true, 00:33:16.583 "data_offset": 2048, 00:33:16.583 "data_size": 63488 00:33:16.583 }, 00:33:16.583 { 00:33:16.583 "name": "BaseBdev3", 00:33:16.583 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:16.583 "is_configured": true, 00:33:16.583 "data_offset": 2048, 00:33:16.583 
"data_size": 63488 00:33:16.583 }, 00:33:16.583 { 00:33:16.583 "name": "BaseBdev4", 00:33:16.583 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:16.583 "is_configured": true, 00:33:16.583 "data_offset": 2048, 00:33:16.583 "data_size": 63488 00:33:16.583 } 00:33:16.583 ] 00:33:16.583 }' 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.583 [2024-11-26 17:29:46.549820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:16.583 [2024-11-26 17:29:46.569081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.583 17:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:33:16.583 [2024-11-26 17:29:46.582900] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:17.519 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:17.519 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:17.519 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:17.519 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:17.519 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:17.519 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:17.519 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.519 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:17.519 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.519 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.519 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:17.519 "name": "raid_bdev1", 00:33:17.519 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:17.519 "strip_size_kb": 64, 00:33:17.519 "state": "online", 00:33:17.519 "raid_level": "raid5f", 00:33:17.519 "superblock": true, 00:33:17.520 "num_base_bdevs": 4, 00:33:17.520 "num_base_bdevs_discovered": 4, 00:33:17.520 "num_base_bdevs_operational": 4, 00:33:17.520 "process": { 00:33:17.520 "type": "rebuild", 00:33:17.520 "target": "spare", 00:33:17.520 "progress": { 00:33:17.520 "blocks": 17280, 00:33:17.520 "percent": 9 00:33:17.520 } 00:33:17.520 }, 00:33:17.520 "base_bdevs_list": [ 00:33:17.520 { 00:33:17.520 "name": "spare", 00:33:17.520 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:17.520 "is_configured": true, 00:33:17.520 "data_offset": 2048, 00:33:17.520 "data_size": 63488 00:33:17.520 }, 00:33:17.520 { 00:33:17.520 "name": "BaseBdev2", 00:33:17.520 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:17.520 "is_configured": true, 00:33:17.520 "data_offset": 2048, 00:33:17.520 "data_size": 63488 00:33:17.520 }, 00:33:17.520 { 
00:33:17.520 "name": "BaseBdev3", 00:33:17.520 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:17.520 "is_configured": true, 00:33:17.520 "data_offset": 2048, 00:33:17.520 "data_size": 63488 00:33:17.520 }, 00:33:17.520 { 00:33:17.520 "name": "BaseBdev4", 00:33:17.520 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:17.520 "is_configured": true, 00:33:17.520 "data_offset": 2048, 00:33:17.520 "data_size": 63488 00:33:17.520 } 00:33:17.520 ] 00:33:17.520 }' 00:33:17.520 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:33:17.779 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=653 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:17.779 "name": "raid_bdev1", 00:33:17.779 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:17.779 "strip_size_kb": 64, 00:33:17.779 "state": "online", 00:33:17.779 "raid_level": "raid5f", 00:33:17.779 "superblock": true, 00:33:17.779 "num_base_bdevs": 4, 00:33:17.779 "num_base_bdevs_discovered": 4, 00:33:17.779 "num_base_bdevs_operational": 4, 00:33:17.779 "process": { 00:33:17.779 "type": "rebuild", 00:33:17.779 "target": "spare", 00:33:17.779 "progress": { 00:33:17.779 "blocks": 21120, 00:33:17.779 "percent": 11 00:33:17.779 } 00:33:17.779 }, 00:33:17.779 "base_bdevs_list": [ 00:33:17.779 { 00:33:17.779 "name": "spare", 00:33:17.779 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:17.779 "is_configured": true, 00:33:17.779 "data_offset": 2048, 00:33:17.779 "data_size": 63488 00:33:17.779 }, 00:33:17.779 { 00:33:17.779 "name": "BaseBdev2", 00:33:17.779 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:17.779 "is_configured": true, 00:33:17.779 "data_offset": 2048, 00:33:17.779 "data_size": 63488 00:33:17.779 }, 00:33:17.779 { 
00:33:17.779 "name": "BaseBdev3", 00:33:17.779 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:17.779 "is_configured": true, 00:33:17.779 "data_offset": 2048, 00:33:17.779 "data_size": 63488 00:33:17.779 }, 00:33:17.779 { 00:33:17.779 "name": "BaseBdev4", 00:33:17.779 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:17.779 "is_configured": true, 00:33:17.779 "data_offset": 2048, 00:33:17.779 "data_size": 63488 00:33:17.779 } 00:33:17.779 ] 00:33:17.779 }' 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:17.779 17:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:19.164 17:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:19.164 17:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:19.164 17:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:19.164 17:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:19.164 17:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:19.164 17:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:19.164 17:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:19.164 17:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.164 17:29:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.164 17:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.165 17:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.165 17:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:19.165 "name": "raid_bdev1", 00:33:19.165 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:19.165 "strip_size_kb": 64, 00:33:19.165 "state": "online", 00:33:19.165 "raid_level": "raid5f", 00:33:19.165 "superblock": true, 00:33:19.165 "num_base_bdevs": 4, 00:33:19.165 "num_base_bdevs_discovered": 4, 00:33:19.165 "num_base_bdevs_operational": 4, 00:33:19.165 "process": { 00:33:19.165 "type": "rebuild", 00:33:19.165 "target": "spare", 00:33:19.165 "progress": { 00:33:19.165 "blocks": 42240, 00:33:19.165 "percent": 22 00:33:19.165 } 00:33:19.165 }, 00:33:19.165 "base_bdevs_list": [ 00:33:19.165 { 00:33:19.165 "name": "spare", 00:33:19.165 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:19.165 "is_configured": true, 00:33:19.165 "data_offset": 2048, 00:33:19.165 "data_size": 63488 00:33:19.165 }, 00:33:19.165 { 00:33:19.165 "name": "BaseBdev2", 00:33:19.165 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:19.165 "is_configured": true, 00:33:19.165 "data_offset": 2048, 00:33:19.165 "data_size": 63488 00:33:19.165 }, 00:33:19.165 { 00:33:19.165 "name": "BaseBdev3", 00:33:19.165 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:19.165 "is_configured": true, 00:33:19.165 "data_offset": 2048, 00:33:19.165 "data_size": 63488 00:33:19.165 }, 00:33:19.165 { 00:33:19.165 "name": "BaseBdev4", 00:33:19.165 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:19.165 "is_configured": true, 00:33:19.165 "data_offset": 2048, 00:33:19.165 "data_size": 63488 00:33:19.165 } 00:33:19.165 ] 00:33:19.165 }' 00:33:19.165 17:29:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:19.165 17:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:19.165 17:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:19.165 17:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:19.165 17:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:20.104 "name": "raid_bdev1", 00:33:20.104 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:20.104 "strip_size_kb": 64, 00:33:20.104 "state": 
"online", 00:33:20.104 "raid_level": "raid5f", 00:33:20.104 "superblock": true, 00:33:20.104 "num_base_bdevs": 4, 00:33:20.104 "num_base_bdevs_discovered": 4, 00:33:20.104 "num_base_bdevs_operational": 4, 00:33:20.104 "process": { 00:33:20.104 "type": "rebuild", 00:33:20.104 "target": "spare", 00:33:20.104 "progress": { 00:33:20.104 "blocks": 65280, 00:33:20.104 "percent": 34 00:33:20.104 } 00:33:20.104 }, 00:33:20.104 "base_bdevs_list": [ 00:33:20.104 { 00:33:20.104 "name": "spare", 00:33:20.104 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:20.104 "is_configured": true, 00:33:20.104 "data_offset": 2048, 00:33:20.104 "data_size": 63488 00:33:20.104 }, 00:33:20.104 { 00:33:20.104 "name": "BaseBdev2", 00:33:20.104 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:20.104 "is_configured": true, 00:33:20.104 "data_offset": 2048, 00:33:20.104 "data_size": 63488 00:33:20.104 }, 00:33:20.104 { 00:33:20.104 "name": "BaseBdev3", 00:33:20.104 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:20.104 "is_configured": true, 00:33:20.104 "data_offset": 2048, 00:33:20.104 "data_size": 63488 00:33:20.104 }, 00:33:20.104 { 00:33:20.104 "name": "BaseBdev4", 00:33:20.104 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:20.104 "is_configured": true, 00:33:20.104 "data_offset": 2048, 00:33:20.104 "data_size": 63488 00:33:20.104 } 00:33:20.104 ] 00:33:20.104 }' 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:20.104 17:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:21.480 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:21.480 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:21.480 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:21.480 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:21.480 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:21.480 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:21.480 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:21.480 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.480 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.480 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:21.480 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.480 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:21.480 "name": "raid_bdev1", 00:33:21.480 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:21.480 "strip_size_kb": 64, 00:33:21.480 "state": "online", 00:33:21.480 "raid_level": "raid5f", 00:33:21.480 "superblock": true, 00:33:21.480 "num_base_bdevs": 4, 00:33:21.480 "num_base_bdevs_discovered": 4, 00:33:21.480 "num_base_bdevs_operational": 4, 00:33:21.480 "process": { 00:33:21.480 "type": "rebuild", 00:33:21.480 "target": "spare", 00:33:21.480 "progress": { 00:33:21.480 "blocks": 86400, 00:33:21.480 "percent": 45 00:33:21.480 } 00:33:21.480 }, 00:33:21.480 "base_bdevs_list": [ 00:33:21.480 { 00:33:21.480 "name": "spare", 00:33:21.480 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 
00:33:21.481 "is_configured": true, 00:33:21.481 "data_offset": 2048, 00:33:21.481 "data_size": 63488 00:33:21.481 }, 00:33:21.481 { 00:33:21.481 "name": "BaseBdev2", 00:33:21.481 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:21.481 "is_configured": true, 00:33:21.481 "data_offset": 2048, 00:33:21.481 "data_size": 63488 00:33:21.481 }, 00:33:21.481 { 00:33:21.481 "name": "BaseBdev3", 00:33:21.481 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:21.481 "is_configured": true, 00:33:21.481 "data_offset": 2048, 00:33:21.481 "data_size": 63488 00:33:21.481 }, 00:33:21.481 { 00:33:21.481 "name": "BaseBdev4", 00:33:21.481 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:21.481 "is_configured": true, 00:33:21.481 "data_offset": 2048, 00:33:21.481 "data_size": 63488 00:33:21.481 } 00:33:21.481 ] 00:33:21.481 }' 00:33:21.481 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:21.481 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:21.481 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:21.481 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:21.481 17:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:22.416 17:29:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:22.416 "name": "raid_bdev1", 00:33:22.416 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:22.416 "strip_size_kb": 64, 00:33:22.416 "state": "online", 00:33:22.416 "raid_level": "raid5f", 00:33:22.416 "superblock": true, 00:33:22.416 "num_base_bdevs": 4, 00:33:22.416 "num_base_bdevs_discovered": 4, 00:33:22.416 "num_base_bdevs_operational": 4, 00:33:22.416 "process": { 00:33:22.416 "type": "rebuild", 00:33:22.416 "target": "spare", 00:33:22.416 "progress": { 00:33:22.416 "blocks": 107520, 00:33:22.416 "percent": 56 00:33:22.416 } 00:33:22.416 }, 00:33:22.416 "base_bdevs_list": [ 00:33:22.416 { 00:33:22.416 "name": "spare", 00:33:22.416 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:22.416 "is_configured": true, 00:33:22.416 "data_offset": 2048, 00:33:22.416 "data_size": 63488 00:33:22.416 }, 00:33:22.416 { 00:33:22.416 "name": "BaseBdev2", 00:33:22.416 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:22.416 "is_configured": true, 00:33:22.416 "data_offset": 2048, 00:33:22.416 "data_size": 63488 00:33:22.416 }, 00:33:22.416 { 00:33:22.416 "name": "BaseBdev3", 00:33:22.416 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:22.416 "is_configured": true, 00:33:22.416 "data_offset": 2048, 00:33:22.416 
"data_size": 63488 00:33:22.416 }, 00:33:22.416 { 00:33:22.416 "name": "BaseBdev4", 00:33:22.416 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:22.416 "is_configured": true, 00:33:22.416 "data_offset": 2048, 00:33:22.416 "data_size": 63488 00:33:22.416 } 00:33:22.416 ] 00:33:22.416 }' 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:22.416 17:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:23.352 17:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:23.352 17:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:23.352 17:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:23.352 17:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:23.352 17:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:23.352 17:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:23.352 17:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:23.352 17:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.352 17:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.352 17:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:23.610 
17:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.610 17:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:23.610 "name": "raid_bdev1", 00:33:23.610 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:23.610 "strip_size_kb": 64, 00:33:23.610 "state": "online", 00:33:23.610 "raid_level": "raid5f", 00:33:23.610 "superblock": true, 00:33:23.610 "num_base_bdevs": 4, 00:33:23.610 "num_base_bdevs_discovered": 4, 00:33:23.610 "num_base_bdevs_operational": 4, 00:33:23.610 "process": { 00:33:23.610 "type": "rebuild", 00:33:23.610 "target": "spare", 00:33:23.610 "progress": { 00:33:23.610 "blocks": 130560, 00:33:23.610 "percent": 68 00:33:23.610 } 00:33:23.610 }, 00:33:23.610 "base_bdevs_list": [ 00:33:23.610 { 00:33:23.610 "name": "spare", 00:33:23.610 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:23.610 "is_configured": true, 00:33:23.610 "data_offset": 2048, 00:33:23.610 "data_size": 63488 00:33:23.610 }, 00:33:23.610 { 00:33:23.610 "name": "BaseBdev2", 00:33:23.610 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:23.610 "is_configured": true, 00:33:23.610 "data_offset": 2048, 00:33:23.610 "data_size": 63488 00:33:23.610 }, 00:33:23.610 { 00:33:23.610 "name": "BaseBdev3", 00:33:23.610 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:23.610 "is_configured": true, 00:33:23.610 "data_offset": 2048, 00:33:23.610 "data_size": 63488 00:33:23.610 }, 00:33:23.610 { 00:33:23.610 "name": "BaseBdev4", 00:33:23.610 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:23.610 "is_configured": true, 00:33:23.610 "data_offset": 2048, 00:33:23.610 "data_size": 63488 00:33:23.610 } 00:33:23.610 ] 00:33:23.610 }' 00:33:23.610 17:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:23.610 17:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:23.610 17:29:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:23.610 17:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:23.610 17:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:24.547 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:24.547 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:24.547 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:24.547 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:24.547 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:24.548 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:24.548 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:24.548 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.548 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.548 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:24.548 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.548 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:24.548 "name": "raid_bdev1", 00:33:24.548 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:24.548 "strip_size_kb": 64, 00:33:24.548 "state": "online", 00:33:24.548 "raid_level": "raid5f", 00:33:24.548 "superblock": true, 00:33:24.548 "num_base_bdevs": 4, 00:33:24.548 "num_base_bdevs_discovered": 4, 00:33:24.548 "num_base_bdevs_operational": 
4, 00:33:24.548 "process": { 00:33:24.548 "type": "rebuild", 00:33:24.548 "target": "spare", 00:33:24.548 "progress": { 00:33:24.548 "blocks": 151680, 00:33:24.548 "percent": 79 00:33:24.548 } 00:33:24.548 }, 00:33:24.548 "base_bdevs_list": [ 00:33:24.548 { 00:33:24.548 "name": "spare", 00:33:24.548 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:24.548 "is_configured": true, 00:33:24.548 "data_offset": 2048, 00:33:24.548 "data_size": 63488 00:33:24.548 }, 00:33:24.548 { 00:33:24.548 "name": "BaseBdev2", 00:33:24.548 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:24.548 "is_configured": true, 00:33:24.548 "data_offset": 2048, 00:33:24.548 "data_size": 63488 00:33:24.548 }, 00:33:24.548 { 00:33:24.548 "name": "BaseBdev3", 00:33:24.548 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:24.548 "is_configured": true, 00:33:24.548 "data_offset": 2048, 00:33:24.548 "data_size": 63488 00:33:24.548 }, 00:33:24.548 { 00:33:24.548 "name": "BaseBdev4", 00:33:24.548 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:24.548 "is_configured": true, 00:33:24.548 "data_offset": 2048, 00:33:24.548 "data_size": 63488 00:33:24.548 } 00:33:24.548 ] 00:33:24.548 }' 00:33:24.548 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:24.806 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:24.806 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:24.806 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:24.806 17:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:25.740 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:25.740 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:25.740 
17:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:25.740 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:25.740 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:25.740 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:25.740 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:25.740 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.740 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:25.740 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:25.740 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.740 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:25.740 "name": "raid_bdev1", 00:33:25.740 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:25.740 "strip_size_kb": 64, 00:33:25.740 "state": "online", 00:33:25.740 "raid_level": "raid5f", 00:33:25.740 "superblock": true, 00:33:25.740 "num_base_bdevs": 4, 00:33:25.740 "num_base_bdevs_discovered": 4, 00:33:25.740 "num_base_bdevs_operational": 4, 00:33:25.740 "process": { 00:33:25.740 "type": "rebuild", 00:33:25.740 "target": "spare", 00:33:25.740 "progress": { 00:33:25.740 "blocks": 172800, 00:33:25.740 "percent": 90 00:33:25.740 } 00:33:25.740 }, 00:33:25.740 "base_bdevs_list": [ 00:33:25.740 { 00:33:25.740 "name": "spare", 00:33:25.740 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:25.740 "is_configured": true, 00:33:25.740 "data_offset": 2048, 00:33:25.740 "data_size": 63488 00:33:25.740 }, 00:33:25.740 { 00:33:25.740 "name": "BaseBdev2", 00:33:25.740 "uuid": 
"036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:25.740 "is_configured": true, 00:33:25.740 "data_offset": 2048, 00:33:25.740 "data_size": 63488 00:33:25.740 }, 00:33:25.740 { 00:33:25.740 "name": "BaseBdev3", 00:33:25.740 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:25.740 "is_configured": true, 00:33:25.740 "data_offset": 2048, 00:33:25.740 "data_size": 63488 00:33:25.740 }, 00:33:25.740 { 00:33:25.740 "name": "BaseBdev4", 00:33:25.740 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:25.740 "is_configured": true, 00:33:25.740 "data_offset": 2048, 00:33:25.740 "data_size": 63488 00:33:25.740 } 00:33:25.740 ] 00:33:25.740 }' 00:33:25.741 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:25.741 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:25.741 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:25.998 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:25.998 17:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:26.566 [2024-11-26 17:29:56.663065] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:26.566 [2024-11-26 17:29:56.663179] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:26.566 [2024-11-26 17:29:56.663348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:26.824 17:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:26.825 17:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:26.825 17:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:26.825 17:29:56 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:26.825 17:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:26.825 17:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:26.825 17:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:26.825 17:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.825 17:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.825 17:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:26.825 17:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.825 17:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:26.825 "name": "raid_bdev1", 00:33:26.825 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:26.825 "strip_size_kb": 64, 00:33:26.825 "state": "online", 00:33:26.825 "raid_level": "raid5f", 00:33:26.825 "superblock": true, 00:33:26.825 "num_base_bdevs": 4, 00:33:26.825 "num_base_bdevs_discovered": 4, 00:33:26.825 "num_base_bdevs_operational": 4, 00:33:26.825 "base_bdevs_list": [ 00:33:26.825 { 00:33:26.825 "name": "spare", 00:33:26.825 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:26.825 "is_configured": true, 00:33:26.825 "data_offset": 2048, 00:33:26.825 "data_size": 63488 00:33:26.825 }, 00:33:26.825 { 00:33:26.825 "name": "BaseBdev2", 00:33:26.825 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:26.825 "is_configured": true, 00:33:26.825 "data_offset": 2048, 00:33:26.825 "data_size": 63488 00:33:26.825 }, 00:33:26.825 { 00:33:26.825 "name": "BaseBdev3", 00:33:26.825 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:26.825 "is_configured": true, 00:33:26.825 "data_offset": 2048, 00:33:26.825 "data_size": 63488 00:33:26.825 }, 
00:33:26.825 { 00:33:26.825 "name": "BaseBdev4", 00:33:26.825 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:26.825 "is_configured": true, 00:33:26.825 "data_offset": 2048, 00:33:26.825 "data_size": 63488 00:33:26.825 } 00:33:26.825 ] 00:33:26.825 }' 00:33:26.825 17:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:27.083 17:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:27.083 17:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:27.083 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:33:27.083 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:33:27.083 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:27.083 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:27.083 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:27.083 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:27.083 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:27.083 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:27.083 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.083 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.083 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:27.083 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.083 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:27.083 "name": "raid_bdev1", 00:33:27.083 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:27.083 "strip_size_kb": 64, 00:33:27.083 "state": "online", 00:33:27.083 "raid_level": "raid5f", 00:33:27.083 "superblock": true, 00:33:27.083 "num_base_bdevs": 4, 00:33:27.083 "num_base_bdevs_discovered": 4, 00:33:27.083 "num_base_bdevs_operational": 4, 00:33:27.083 "base_bdevs_list": [ 00:33:27.083 { 00:33:27.084 "name": "spare", 00:33:27.084 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:27.084 "is_configured": true, 00:33:27.084 "data_offset": 2048, 00:33:27.084 "data_size": 63488 00:33:27.084 }, 00:33:27.084 { 00:33:27.084 "name": "BaseBdev2", 00:33:27.084 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:27.084 "is_configured": true, 00:33:27.084 "data_offset": 2048, 00:33:27.084 "data_size": 63488 00:33:27.084 }, 00:33:27.084 { 00:33:27.084 "name": "BaseBdev3", 00:33:27.084 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:27.084 "is_configured": true, 00:33:27.084 "data_offset": 2048, 00:33:27.084 "data_size": 63488 00:33:27.084 }, 00:33:27.084 { 00:33:27.084 "name": "BaseBdev4", 00:33:27.084 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:27.084 "is_configured": true, 00:33:27.084 "data_offset": 2048, 00:33:27.084 "data_size": 63488 00:33:27.084 } 00:33:27.084 ] 00:33:27.084 }' 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:27.084 17:29:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.084 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.342 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:27.342 "name": "raid_bdev1", 00:33:27.342 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:27.342 "strip_size_kb": 64, 00:33:27.342 "state": "online", 00:33:27.342 "raid_level": "raid5f", 00:33:27.342 "superblock": true, 00:33:27.342 "num_base_bdevs": 4, 00:33:27.342 "num_base_bdevs_discovered": 4, 00:33:27.342 "num_base_bdevs_operational": 4, 00:33:27.342 
"base_bdevs_list": [ 00:33:27.342 { 00:33:27.342 "name": "spare", 00:33:27.342 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:27.342 "is_configured": true, 00:33:27.342 "data_offset": 2048, 00:33:27.342 "data_size": 63488 00:33:27.342 }, 00:33:27.342 { 00:33:27.342 "name": "BaseBdev2", 00:33:27.342 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:27.342 "is_configured": true, 00:33:27.342 "data_offset": 2048, 00:33:27.342 "data_size": 63488 00:33:27.342 }, 00:33:27.342 { 00:33:27.342 "name": "BaseBdev3", 00:33:27.342 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:27.342 "is_configured": true, 00:33:27.342 "data_offset": 2048, 00:33:27.342 "data_size": 63488 00:33:27.342 }, 00:33:27.342 { 00:33:27.342 "name": "BaseBdev4", 00:33:27.342 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:27.342 "is_configured": true, 00:33:27.342 "data_offset": 2048, 00:33:27.342 "data_size": 63488 00:33:27.342 } 00:33:27.342 ] 00:33:27.342 }' 00:33:27.342 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:27.342 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:27.601 [2024-11-26 17:29:57.591711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:27.601 [2024-11-26 17:29:57.591760] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:27.601 [2024-11-26 17:29:57.591878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:27.601 [2024-11-26 17:29:57.592002] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:33:27.601 [2024-11-26 17:29:57.592033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:27.601 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:27.860 /dev/nbd0 00:33:27.860 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:27.860 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:27.860 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:27.861 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:33:27.861 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:27.861 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:27.861 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:27.861 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:33:27.861 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:27.861 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:27.861 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:27.861 1+0 records in 00:33:27.861 1+0 records out 00:33:27.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330115 s, 12.4 MB/s 00:33:27.861 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:27.861 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:33:27.861 17:29:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:27.861 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:27.861 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:33:27.861 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:27.861 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:27.861 17:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:33:28.125 /dev/nbd1 00:33:28.125 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:28.125 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:28.125 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:33:28.125 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:33:28.125 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:28.125 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:28.125 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:33:28.125 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:33:28.125 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:28.125 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:28.125 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:33:28.125 1+0 records in 00:33:28.125 1+0 records out 00:33:28.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508127 s, 8.1 MB/s 00:33:28.125 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:28.125 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:33:28.125 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:28.384 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:28.384 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:33:28.384 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:28.384 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:28.384 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:28.384 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:33:28.384 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:28.384 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:28.384 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:28.384 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:33:28.384 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:28.384 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:28.643 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:33:28.643 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:28.643 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:28.643 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:28.643 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:28.643 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:28.643 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:28.643 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:33:28.643 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:28.643 17:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:29.210 [2024-11-26 17:29:59.046589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:29.210 [2024-11-26 17:29:59.046661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.210 [2024-11-26 17:29:59.046696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:33:29.210 [2024-11-26 17:29:59.046712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.210 [2024-11-26 17:29:59.049877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.210 [2024-11-26 17:29:59.049940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:29.210 [2024-11-26 17:29:59.050055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:29.210 [2024-11-26 17:29:59.050129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:29.210 [2024-11-26 17:29:59.050299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:29.210 [2024-11-26 17:29:59.050418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:29.210 [2024-11-26 17:29:59.050536] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:29.210 spare 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:29.210 [2024-11-26 17:29:59.150546] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:33:29.210 [2024-11-26 17:29:59.150636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:29.210 [2024-11-26 17:29:59.151093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:33:29.210 [2024-11-26 17:29:59.160026] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:33:29.210 [2024-11-26 17:29:59.160076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:33:29.210 [2024-11-26 17:29:59.160374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.210 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:29.210 "name": "raid_bdev1", 00:33:29.210 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:29.210 "strip_size_kb": 64, 00:33:29.210 "state": "online", 00:33:29.210 "raid_level": "raid5f", 00:33:29.210 "superblock": true, 00:33:29.210 "num_base_bdevs": 4, 00:33:29.210 "num_base_bdevs_discovered": 4, 00:33:29.210 "num_base_bdevs_operational": 4, 00:33:29.210 "base_bdevs_list": [ 00:33:29.210 { 00:33:29.210 "name": "spare", 00:33:29.210 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:29.210 "is_configured": true, 00:33:29.210 "data_offset": 2048, 00:33:29.210 "data_size": 63488 00:33:29.210 }, 00:33:29.210 { 00:33:29.211 "name": "BaseBdev2", 00:33:29.211 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:29.211 "is_configured": true, 00:33:29.211 "data_offset": 
2048, 00:33:29.211 "data_size": 63488 00:33:29.211 }, 00:33:29.211 { 00:33:29.211 "name": "BaseBdev3", 00:33:29.211 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:29.211 "is_configured": true, 00:33:29.211 "data_offset": 2048, 00:33:29.211 "data_size": 63488 00:33:29.211 }, 00:33:29.211 { 00:33:29.211 "name": "BaseBdev4", 00:33:29.211 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:29.211 "is_configured": true, 00:33:29.211 "data_offset": 2048, 00:33:29.211 "data_size": 63488 00:33:29.211 } 00:33:29.211 ] 00:33:29.211 }' 00:33:29.211 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:29.211 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:29.778 "name": 
"raid_bdev1", 00:33:29.778 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:29.778 "strip_size_kb": 64, 00:33:29.778 "state": "online", 00:33:29.778 "raid_level": "raid5f", 00:33:29.778 "superblock": true, 00:33:29.778 "num_base_bdevs": 4, 00:33:29.778 "num_base_bdevs_discovered": 4, 00:33:29.778 "num_base_bdevs_operational": 4, 00:33:29.778 "base_bdevs_list": [ 00:33:29.778 { 00:33:29.778 "name": "spare", 00:33:29.778 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:29.778 "is_configured": true, 00:33:29.778 "data_offset": 2048, 00:33:29.778 "data_size": 63488 00:33:29.778 }, 00:33:29.778 { 00:33:29.778 "name": "BaseBdev2", 00:33:29.778 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:29.778 "is_configured": true, 00:33:29.778 "data_offset": 2048, 00:33:29.778 "data_size": 63488 00:33:29.778 }, 00:33:29.778 { 00:33:29.778 "name": "BaseBdev3", 00:33:29.778 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:29.778 "is_configured": true, 00:33:29.778 "data_offset": 2048, 00:33:29.778 "data_size": 63488 00:33:29.778 }, 00:33:29.778 { 00:33:29.778 "name": "BaseBdev4", 00:33:29.778 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:29.778 "is_configured": true, 00:33:29.778 "data_offset": 2048, 00:33:29.778 "data_size": 63488 00:33:29.778 } 00:33:29.778 ] 00:33:29.778 }' 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:29.778 [2024-11-26 17:29:59.781810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:29.778 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.779 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:29.779 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.779 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.779 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:29.779 "name": "raid_bdev1", 00:33:29.779 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:29.779 "strip_size_kb": 64, 00:33:29.779 "state": "online", 00:33:29.779 "raid_level": "raid5f", 00:33:29.779 "superblock": true, 00:33:29.779 "num_base_bdevs": 4, 00:33:29.779 "num_base_bdevs_discovered": 3, 00:33:29.779 "num_base_bdevs_operational": 3, 00:33:29.779 "base_bdevs_list": [ 00:33:29.779 { 00:33:29.779 "name": null, 00:33:29.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:29.779 "is_configured": false, 00:33:29.779 "data_offset": 0, 00:33:29.779 "data_size": 63488 00:33:29.779 }, 00:33:29.779 { 00:33:29.779 "name": "BaseBdev2", 00:33:29.779 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:29.779 "is_configured": true, 00:33:29.779 "data_offset": 2048, 00:33:29.779 "data_size": 63488 00:33:29.779 }, 00:33:29.779 { 00:33:29.779 "name": "BaseBdev3", 00:33:29.779 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:29.779 "is_configured": true, 00:33:29.779 "data_offset": 2048, 00:33:29.779 "data_size": 63488 00:33:29.779 }, 00:33:29.779 { 00:33:29.779 "name": "BaseBdev4", 00:33:29.779 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:29.779 "is_configured": true, 00:33:29.779 "data_offset": 
2048, 00:33:29.779 "data_size": 63488 00:33:29.779 } 00:33:29.779 ] 00:33:29.779 }' 00:33:29.779 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:29.779 17:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:30.347 17:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:30.347 17:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.347 17:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:30.347 [2024-11-26 17:30:00.225866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:30.347 [2024-11-26 17:30:00.226128] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:30.347 [2024-11-26 17:30:00.226170] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:30.347 [2024-11-26 17:30:00.226231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:30.347 [2024-11-26 17:30:00.244716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:33:30.347 17:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.347 17:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:33:30.347 [2024-11-26 17:30:00.256611] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:31.284 "name": "raid_bdev1", 00:33:31.284 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:31.284 "strip_size_kb": 64, 00:33:31.284 "state": "online", 00:33:31.284 
"raid_level": "raid5f", 00:33:31.284 "superblock": true, 00:33:31.284 "num_base_bdevs": 4, 00:33:31.284 "num_base_bdevs_discovered": 4, 00:33:31.284 "num_base_bdevs_operational": 4, 00:33:31.284 "process": { 00:33:31.284 "type": "rebuild", 00:33:31.284 "target": "spare", 00:33:31.284 "progress": { 00:33:31.284 "blocks": 17280, 00:33:31.284 "percent": 9 00:33:31.284 } 00:33:31.284 }, 00:33:31.284 "base_bdevs_list": [ 00:33:31.284 { 00:33:31.284 "name": "spare", 00:33:31.284 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:31.284 "is_configured": true, 00:33:31.284 "data_offset": 2048, 00:33:31.284 "data_size": 63488 00:33:31.284 }, 00:33:31.284 { 00:33:31.284 "name": "BaseBdev2", 00:33:31.284 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:31.284 "is_configured": true, 00:33:31.284 "data_offset": 2048, 00:33:31.284 "data_size": 63488 00:33:31.284 }, 00:33:31.284 { 00:33:31.284 "name": "BaseBdev3", 00:33:31.284 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:31.284 "is_configured": true, 00:33:31.284 "data_offset": 2048, 00:33:31.284 "data_size": 63488 00:33:31.284 }, 00:33:31.284 { 00:33:31.284 "name": "BaseBdev4", 00:33:31.284 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:31.284 "is_configured": true, 00:33:31.284 "data_offset": 2048, 00:33:31.284 "data_size": 63488 00:33:31.284 } 00:33:31.284 ] 00:33:31.284 }' 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.284 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:31.284 [2024-11-26 17:30:01.384438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:31.543 [2024-11-26 17:30:01.466624] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:31.543 [2024-11-26 17:30:01.466749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:31.543 [2024-11-26 17:30:01.466772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:31.543 [2024-11-26 17:30:01.466790] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:31.543 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.543 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:31.543 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:31.543 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:31.543 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:31.543 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:31.543 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:31.543 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:31.543 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:31.543 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:31.543 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:33:31.544 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:31.544 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.544 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:31.544 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:31.544 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.544 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:31.544 "name": "raid_bdev1", 00:33:31.544 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:31.544 "strip_size_kb": 64, 00:33:31.544 "state": "online", 00:33:31.544 "raid_level": "raid5f", 00:33:31.544 "superblock": true, 00:33:31.544 "num_base_bdevs": 4, 00:33:31.544 "num_base_bdevs_discovered": 3, 00:33:31.544 "num_base_bdevs_operational": 3, 00:33:31.544 "base_bdevs_list": [ 00:33:31.544 { 00:33:31.544 "name": null, 00:33:31.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:31.544 "is_configured": false, 00:33:31.544 "data_offset": 0, 00:33:31.544 "data_size": 63488 00:33:31.544 }, 00:33:31.544 { 00:33:31.544 "name": "BaseBdev2", 00:33:31.544 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:31.544 "is_configured": true, 00:33:31.544 "data_offset": 2048, 00:33:31.544 "data_size": 63488 00:33:31.544 }, 00:33:31.544 { 00:33:31.544 "name": "BaseBdev3", 00:33:31.544 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:31.544 "is_configured": true, 00:33:31.544 "data_offset": 2048, 00:33:31.544 "data_size": 63488 00:33:31.544 }, 00:33:31.544 { 00:33:31.544 "name": "BaseBdev4", 00:33:31.544 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:31.544 "is_configured": true, 00:33:31.544 "data_offset": 2048, 00:33:31.544 "data_size": 63488 00:33:31.544 } 00:33:31.544 ] 00:33:31.544 }' 
00:33:31.544 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:31.544 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:31.803 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:31.803 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.803 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:31.803 [2024-11-26 17:30:01.907414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:31.803 [2024-11-26 17:30:01.907506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:31.803 [2024-11-26 17:30:01.907550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:33:31.803 [2024-11-26 17:30:01.907568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:31.803 [2024-11-26 17:30:01.908156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:31.803 [2024-11-26 17:30:01.908198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:31.803 [2024-11-26 17:30:01.908312] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:31.803 [2024-11-26 17:30:01.908337] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:31.803 [2024-11-26 17:30:01.908350] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:31.803 [2024-11-26 17:30:01.908385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:32.061 [2024-11-26 17:30:01.924177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:33:32.061 spare 00:33:32.061 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.061 17:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:33:32.061 [2024-11-26 17:30:01.934195] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:33.064 17:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:33.064 17:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:33.064 17:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:33.064 17:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:33.064 17:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:33.064 17:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:33.064 17:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:33.064 17:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.064 17:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:33.064 17:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.064 17:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:33.064 "name": "raid_bdev1", 00:33:33.064 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:33.064 "strip_size_kb": 64, 00:33:33.064 "state": 
"online", 00:33:33.064 "raid_level": "raid5f", 00:33:33.064 "superblock": true, 00:33:33.064 "num_base_bdevs": 4, 00:33:33.064 "num_base_bdevs_discovered": 4, 00:33:33.064 "num_base_bdevs_operational": 4, 00:33:33.064 "process": { 00:33:33.064 "type": "rebuild", 00:33:33.064 "target": "spare", 00:33:33.064 "progress": { 00:33:33.064 "blocks": 19200, 00:33:33.064 "percent": 10 00:33:33.064 } 00:33:33.064 }, 00:33:33.064 "base_bdevs_list": [ 00:33:33.064 { 00:33:33.064 "name": "spare", 00:33:33.064 "uuid": "c31bd0a1-7d72-5c64-bcea-25294c2299b6", 00:33:33.064 "is_configured": true, 00:33:33.064 "data_offset": 2048, 00:33:33.064 "data_size": 63488 00:33:33.064 }, 00:33:33.064 { 00:33:33.064 "name": "BaseBdev2", 00:33:33.064 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:33.064 "is_configured": true, 00:33:33.064 "data_offset": 2048, 00:33:33.064 "data_size": 63488 00:33:33.064 }, 00:33:33.064 { 00:33:33.064 "name": "BaseBdev3", 00:33:33.064 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:33.064 "is_configured": true, 00:33:33.064 "data_offset": 2048, 00:33:33.064 "data_size": 63488 00:33:33.064 }, 00:33:33.064 { 00:33:33.064 "name": "BaseBdev4", 00:33:33.064 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:33.064 "is_configured": true, 00:33:33.064 "data_offset": 2048, 00:33:33.064 "data_size": 63488 00:33:33.064 } 00:33:33.064 ] 00:33:33.064 }' 00:33:33.064 17:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:33.064 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:33.064 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:33.064 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:33.064 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:33:33.064 17:30:03 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.064 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:33.064 [2024-11-26 17:30:03.094194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:33.064 [2024-11-26 17:30:03.144036] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:33.064 [2024-11-26 17:30:03.144129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:33.064 [2024-11-26 17:30:03.144160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:33.064 [2024-11-26 17:30:03.144170] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:33.322 17:30:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.322 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:33.323 "name": "raid_bdev1", 00:33:33.323 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:33.323 "strip_size_kb": 64, 00:33:33.323 "state": "online", 00:33:33.323 "raid_level": "raid5f", 00:33:33.323 "superblock": true, 00:33:33.323 "num_base_bdevs": 4, 00:33:33.323 "num_base_bdevs_discovered": 3, 00:33:33.323 "num_base_bdevs_operational": 3, 00:33:33.323 "base_bdevs_list": [ 00:33:33.323 { 00:33:33.323 "name": null, 00:33:33.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:33.323 "is_configured": false, 00:33:33.323 "data_offset": 0, 00:33:33.323 "data_size": 63488 00:33:33.323 }, 00:33:33.323 { 00:33:33.323 "name": "BaseBdev2", 00:33:33.323 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:33.323 "is_configured": true, 00:33:33.323 "data_offset": 2048, 00:33:33.323 "data_size": 63488 00:33:33.323 }, 00:33:33.323 { 00:33:33.323 "name": "BaseBdev3", 00:33:33.323 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:33.323 "is_configured": true, 00:33:33.323 "data_offset": 2048, 00:33:33.323 "data_size": 63488 00:33:33.323 }, 00:33:33.323 { 00:33:33.323 "name": "BaseBdev4", 00:33:33.323 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:33.323 "is_configured": true, 00:33:33.323 "data_offset": 2048, 00:33:33.323 
"data_size": 63488 00:33:33.323 } 00:33:33.323 ] 00:33:33.323 }' 00:33:33.323 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:33.323 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:33.580 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:33.580 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:33.580 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:33.580 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:33.580 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:33.580 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:33.580 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.580 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:33.580 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:33.580 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.580 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:33.580 "name": "raid_bdev1", 00:33:33.580 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:33.580 "strip_size_kb": 64, 00:33:33.580 "state": "online", 00:33:33.580 "raid_level": "raid5f", 00:33:33.580 "superblock": true, 00:33:33.580 "num_base_bdevs": 4, 00:33:33.580 "num_base_bdevs_discovered": 3, 00:33:33.580 "num_base_bdevs_operational": 3, 00:33:33.580 "base_bdevs_list": [ 00:33:33.580 { 00:33:33.580 "name": null, 00:33:33.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:33.580 
"is_configured": false, 00:33:33.580 "data_offset": 0, 00:33:33.580 "data_size": 63488 00:33:33.580 }, 00:33:33.580 { 00:33:33.580 "name": "BaseBdev2", 00:33:33.581 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:33.581 "is_configured": true, 00:33:33.581 "data_offset": 2048, 00:33:33.581 "data_size": 63488 00:33:33.581 }, 00:33:33.581 { 00:33:33.581 "name": "BaseBdev3", 00:33:33.581 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:33.581 "is_configured": true, 00:33:33.581 "data_offset": 2048, 00:33:33.581 "data_size": 63488 00:33:33.581 }, 00:33:33.581 { 00:33:33.581 "name": "BaseBdev4", 00:33:33.581 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:33.581 "is_configured": true, 00:33:33.581 "data_offset": 2048, 00:33:33.581 "data_size": 63488 00:33:33.581 } 00:33:33.581 ] 00:33:33.581 }' 00:33:33.581 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:33.581 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:33.581 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:33.838 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:33.838 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:33:33.838 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.838 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:33.838 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.838 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:33.838 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.838 17:30:03 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:33.838 [2024-11-26 17:30:03.716904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:33.838 [2024-11-26 17:30:03.717011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:33.838 [2024-11-26 17:30:03.717058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:33:33.838 [2024-11-26 17:30:03.717088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:33.838 [2024-11-26 17:30:03.717935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:33.838 [2024-11-26 17:30:03.718002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:33.838 [2024-11-26 17:30:03.718169] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:33.838 [2024-11-26 17:30:03.718209] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:33.838 [2024-11-26 17:30:03.718238] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:33.838 [2024-11-26 17:30:03.718261] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:33:33.838 BaseBdev1 00:33:33.838 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.838 17:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:34.772 "name": "raid_bdev1", 00:33:34.772 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:34.772 "strip_size_kb": 64, 00:33:34.772 "state": "online", 00:33:34.772 "raid_level": "raid5f", 00:33:34.772 "superblock": true, 00:33:34.772 "num_base_bdevs": 4, 00:33:34.772 "num_base_bdevs_discovered": 3, 00:33:34.772 "num_base_bdevs_operational": 3, 00:33:34.772 "base_bdevs_list": [ 00:33:34.772 { 00:33:34.772 "name": null, 00:33:34.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.772 "is_configured": false, 00:33:34.772 
"data_offset": 0, 00:33:34.772 "data_size": 63488 00:33:34.772 }, 00:33:34.772 { 00:33:34.772 "name": "BaseBdev2", 00:33:34.772 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:34.772 "is_configured": true, 00:33:34.772 "data_offset": 2048, 00:33:34.772 "data_size": 63488 00:33:34.772 }, 00:33:34.772 { 00:33:34.772 "name": "BaseBdev3", 00:33:34.772 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:34.772 "is_configured": true, 00:33:34.772 "data_offset": 2048, 00:33:34.772 "data_size": 63488 00:33:34.772 }, 00:33:34.772 { 00:33:34.772 "name": "BaseBdev4", 00:33:34.772 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:34.772 "is_configured": true, 00:33:34.772 "data_offset": 2048, 00:33:34.772 "data_size": 63488 00:33:34.772 } 00:33:34.772 ] 00:33:34.772 }' 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:34.772 17:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:35.338 "name": "raid_bdev1", 00:33:35.338 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:35.338 "strip_size_kb": 64, 00:33:35.338 "state": "online", 00:33:35.338 "raid_level": "raid5f", 00:33:35.338 "superblock": true, 00:33:35.338 "num_base_bdevs": 4, 00:33:35.338 "num_base_bdevs_discovered": 3, 00:33:35.338 "num_base_bdevs_operational": 3, 00:33:35.338 "base_bdevs_list": [ 00:33:35.338 { 00:33:35.338 "name": null, 00:33:35.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.338 "is_configured": false, 00:33:35.338 "data_offset": 0, 00:33:35.338 "data_size": 63488 00:33:35.338 }, 00:33:35.338 { 00:33:35.338 "name": "BaseBdev2", 00:33:35.338 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:35.338 "is_configured": true, 00:33:35.338 "data_offset": 2048, 00:33:35.338 "data_size": 63488 00:33:35.338 }, 00:33:35.338 { 00:33:35.338 "name": "BaseBdev3", 00:33:35.338 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:35.338 "is_configured": true, 00:33:35.338 "data_offset": 2048, 00:33:35.338 "data_size": 63488 00:33:35.338 }, 00:33:35.338 { 00:33:35.338 "name": "BaseBdev4", 00:33:35.338 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:35.338 "is_configured": true, 00:33:35.338 "data_offset": 2048, 00:33:35.338 "data_size": 63488 00:33:35.338 } 00:33:35.338 ] 00:33:35.338 }' 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:35.338 
17:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.338 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:35.338 [2024-11-26 17:30:05.334759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:35.338 [2024-11-26 17:30:05.335050] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:35.338 [2024-11-26 17:30:05.335098] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:35.338 request: 00:33:35.338 { 00:33:35.338 "base_bdev": "BaseBdev1", 00:33:35.338 "raid_bdev": "raid_bdev1", 00:33:35.339 "method": "bdev_raid_add_base_bdev", 00:33:35.339 "req_id": 1 00:33:35.339 } 00:33:35.339 Got JSON-RPC error response 00:33:35.339 response: 00:33:35.339 { 00:33:35.339 "code": -22, 00:33:35.339 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:33:35.339 } 00:33:35.339 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:35.339 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:33:35.339 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:35.339 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:35.339 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:35.339 17:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:33:36.275 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:36.275 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:36.275 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:36.275 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:36.275 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:36.275 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:36.275 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:36.275 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:36.275 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:36.275 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:36.275 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:36.275 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:36.275 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.275 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:36.534 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.534 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:36.534 "name": "raid_bdev1", 00:33:36.534 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:36.534 "strip_size_kb": 64, 00:33:36.534 "state": "online", 00:33:36.534 "raid_level": "raid5f", 00:33:36.534 "superblock": true, 00:33:36.534 "num_base_bdevs": 4, 00:33:36.534 "num_base_bdevs_discovered": 3, 00:33:36.534 "num_base_bdevs_operational": 3, 00:33:36.534 "base_bdevs_list": [ 00:33:36.534 { 00:33:36.534 "name": null, 00:33:36.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:36.534 "is_configured": false, 00:33:36.534 "data_offset": 0, 00:33:36.534 "data_size": 63488 00:33:36.534 }, 00:33:36.534 { 00:33:36.534 "name": "BaseBdev2", 00:33:36.534 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:36.534 "is_configured": true, 00:33:36.534 "data_offset": 2048, 00:33:36.534 "data_size": 63488 00:33:36.534 }, 00:33:36.534 { 00:33:36.534 "name": "BaseBdev3", 00:33:36.534 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:36.534 "is_configured": true, 00:33:36.534 "data_offset": 2048, 00:33:36.534 "data_size": 63488 00:33:36.534 }, 00:33:36.534 { 00:33:36.534 "name": "BaseBdev4", 00:33:36.534 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:36.534 "is_configured": true, 00:33:36.534 "data_offset": 2048, 00:33:36.534 "data_size": 63488 00:33:36.534 } 00:33:36.534 ] 00:33:36.534 }' 00:33:36.534 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:36.534 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:33:36.793 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:36.793 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:36.793 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:36.793 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:36.793 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:36.793 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:36.793 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.793 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:36.793 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:36.793 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.793 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:36.793 "name": "raid_bdev1", 00:33:36.793 "uuid": "95f1bc5f-ed7c-41b9-a8b6-a978cfd4f16a", 00:33:36.793 "strip_size_kb": 64, 00:33:36.793 "state": "online", 00:33:36.793 "raid_level": "raid5f", 00:33:36.793 "superblock": true, 00:33:36.793 "num_base_bdevs": 4, 00:33:36.793 "num_base_bdevs_discovered": 3, 00:33:36.793 "num_base_bdevs_operational": 3, 00:33:36.793 "base_bdevs_list": [ 00:33:36.793 { 00:33:36.793 "name": null, 00:33:36.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:36.793 "is_configured": false, 00:33:36.793 "data_offset": 0, 00:33:36.793 "data_size": 63488 00:33:36.793 }, 00:33:36.793 { 00:33:36.793 "name": "BaseBdev2", 00:33:36.793 "uuid": "036e9b1c-4fb7-5ce9-a7ca-2ca7c373ea45", 00:33:36.793 "is_configured": true, 
00:33:36.793 "data_offset": 2048, 00:33:36.793 "data_size": 63488 00:33:36.793 }, 00:33:36.793 { 00:33:36.793 "name": "BaseBdev3", 00:33:36.793 "uuid": "b15cea63-11fe-511b-97d8-03486368e9b3", 00:33:36.793 "is_configured": true, 00:33:36.793 "data_offset": 2048, 00:33:36.793 "data_size": 63488 00:33:36.793 }, 00:33:36.793 { 00:33:36.793 "name": "BaseBdev4", 00:33:36.793 "uuid": "d5f5ca30-0d65-5748-b22e-864769ad82e6", 00:33:36.793 "is_configured": true, 00:33:36.793 "data_offset": 2048, 00:33:36.793 "data_size": 63488 00:33:36.793 } 00:33:36.793 ] 00:33:36.793 }' 00:33:36.793 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:36.793 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:36.793 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:37.053 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:37.053 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85289 00:33:37.053 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85289 ']' 00:33:37.053 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85289 00:33:37.053 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:33:37.053 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:37.053 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85289 00:33:37.053 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:37.053 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:37.053 killing process with pid 85289 00:33:37.053 17:30:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85289' 00:33:37.053 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85289 00:33:37.053 Received shutdown signal, test time was about 60.000000 seconds 00:33:37.053 00:33:37.053 Latency(us) 00:33:37.053 [2024-11-26T17:30:07.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.053 [2024-11-26T17:30:07.167Z] =================================================================================================================== 00:33:37.053 [2024-11-26T17:30:07.167Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:37.053 [2024-11-26 17:30:06.969149] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:37.053 17:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85289 00:33:37.053 [2024-11-26 17:30:06.969321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:37.053 [2024-11-26 17:30:06.969428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:37.053 [2024-11-26 17:30:06.969448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:33:37.621 [2024-11-26 17:30:07.479694] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:38.995 17:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:33:38.995 00:33:38.995 real 0m27.474s 00:33:38.995 user 0m34.210s 00:33:38.995 sys 0m3.632s 00:33:38.995 17:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:38.995 ************************************ 00:33:38.995 END TEST raid5f_rebuild_test_sb 00:33:38.995 ************************************ 00:33:38.995 17:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:38.995 17:30:08 
bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:33:38.995 17:30:08 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:33:38.995 17:30:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:38.995 17:30:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:38.995 17:30:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:38.995 ************************************ 00:33:38.995 START TEST raid_state_function_test_sb_4k 00:33:38.995 ************************************ 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:38.995 17:30:08 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86104 00:33:38.995 Process raid pid: 86104 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86104' 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86104 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86104 ']' 00:33:38.995 17:30:08 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:38.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:38.995 17:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:38.995 [2024-11-26 17:30:08.893872] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:33:38.995 [2024-11-26 17:30:08.894029] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.995 [2024-11-26 17:30:09.082700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.255 [2024-11-26 17:30:09.237633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.513 [2024-11-26 17:30:09.481716] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:39.513 [2024-11-26 17:30:09.481774] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:39.772 [2024-11-26 17:30:09.773605] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:39.772 [2024-11-26 17:30:09.773681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:39.772 [2024-11-26 17:30:09.773695] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:39.772 [2024-11-26 17:30:09.773710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:39.772 
17:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:39.772 "name": "Existed_Raid", 00:33:39.772 "uuid": "06c4f93f-46e7-4f20-aeef-e06de6defefa", 00:33:39.772 "strip_size_kb": 0, 00:33:39.772 "state": "configuring", 00:33:39.772 "raid_level": "raid1", 00:33:39.772 "superblock": true, 00:33:39.772 "num_base_bdevs": 2, 00:33:39.772 "num_base_bdevs_discovered": 0, 00:33:39.772 "num_base_bdevs_operational": 2, 00:33:39.772 "base_bdevs_list": [ 00:33:39.772 { 00:33:39.772 "name": "BaseBdev1", 00:33:39.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.772 "is_configured": false, 00:33:39.772 "data_offset": 0, 00:33:39.772 "data_size": 0 00:33:39.772 }, 00:33:39.772 { 00:33:39.772 "name": "BaseBdev2", 00:33:39.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.772 "is_configured": false, 00:33:39.772 "data_offset": 0, 00:33:39.772 "data_size": 0 00:33:39.772 } 00:33:39.772 ] 00:33:39.772 }' 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:39.772 17:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:40.031 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:33:40.031 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.031 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:40.031 [2024-11-26 17:30:10.141018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:40.031 [2024-11-26 17:30:10.141062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:40.290 [2024-11-26 17:30:10.152980] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:40.290 [2024-11-26 17:30:10.153031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:40.290 [2024-11-26 17:30:10.153043] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:40.290 [2024-11-26 17:30:10.153061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.290 17:30:10 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:40.290 [2024-11-26 17:30:10.202512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:40.290 BaseBdev1 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:40.290 [ 00:33:40.290 { 00:33:40.290 "name": "BaseBdev1", 00:33:40.290 "aliases": [ 00:33:40.290 
"3078fbb9-c674-491e-bb44-bada907ba4be" 00:33:40.290 ], 00:33:40.290 "product_name": "Malloc disk", 00:33:40.290 "block_size": 4096, 00:33:40.290 "num_blocks": 8192, 00:33:40.290 "uuid": "3078fbb9-c674-491e-bb44-bada907ba4be", 00:33:40.290 "assigned_rate_limits": { 00:33:40.290 "rw_ios_per_sec": 0, 00:33:40.290 "rw_mbytes_per_sec": 0, 00:33:40.290 "r_mbytes_per_sec": 0, 00:33:40.290 "w_mbytes_per_sec": 0 00:33:40.290 }, 00:33:40.290 "claimed": true, 00:33:40.290 "claim_type": "exclusive_write", 00:33:40.290 "zoned": false, 00:33:40.290 "supported_io_types": { 00:33:40.290 "read": true, 00:33:40.290 "write": true, 00:33:40.290 "unmap": true, 00:33:40.290 "flush": true, 00:33:40.290 "reset": true, 00:33:40.290 "nvme_admin": false, 00:33:40.290 "nvme_io": false, 00:33:40.290 "nvme_io_md": false, 00:33:40.290 "write_zeroes": true, 00:33:40.290 "zcopy": true, 00:33:40.290 "get_zone_info": false, 00:33:40.290 "zone_management": false, 00:33:40.290 "zone_append": false, 00:33:40.290 "compare": false, 00:33:40.290 "compare_and_write": false, 00:33:40.290 "abort": true, 00:33:40.290 "seek_hole": false, 00:33:40.290 "seek_data": false, 00:33:40.290 "copy": true, 00:33:40.290 "nvme_iov_md": false 00:33:40.290 }, 00:33:40.290 "memory_domains": [ 00:33:40.290 { 00:33:40.290 "dma_device_id": "system", 00:33:40.290 "dma_device_type": 1 00:33:40.290 }, 00:33:40.290 { 00:33:40.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:40.290 "dma_device_type": 2 00:33:40.290 } 00:33:40.290 ], 00:33:40.290 "driver_specific": {} 00:33:40.290 } 00:33:40.290 ] 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:40.290 "name": "Existed_Raid", 00:33:40.290 "uuid": "bfd29dbe-9841-4629-b4ee-40f71ea00cba", 00:33:40.290 "strip_size_kb": 0, 00:33:40.290 "state": "configuring", 00:33:40.290 "raid_level": "raid1", 00:33:40.290 "superblock": true, 00:33:40.290 "num_base_bdevs": 2, 00:33:40.290 
"num_base_bdevs_discovered": 1, 00:33:40.290 "num_base_bdevs_operational": 2, 00:33:40.290 "base_bdevs_list": [ 00:33:40.290 { 00:33:40.290 "name": "BaseBdev1", 00:33:40.290 "uuid": "3078fbb9-c674-491e-bb44-bada907ba4be", 00:33:40.290 "is_configured": true, 00:33:40.290 "data_offset": 256, 00:33:40.290 "data_size": 7936 00:33:40.290 }, 00:33:40.290 { 00:33:40.290 "name": "BaseBdev2", 00:33:40.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.290 "is_configured": false, 00:33:40.290 "data_offset": 0, 00:33:40.290 "data_size": 0 00:33:40.290 } 00:33:40.290 ] 00:33:40.290 }' 00:33:40.290 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:40.291 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:40.862 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:40.862 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.862 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:40.862 [2024-11-26 17:30:10.685908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:40.862 [2024-11-26 17:30:10.685975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:33:40.862 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.862 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:40.862 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.862 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:40.862 [2024-11-26 17:30:10.697947] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:40.862 [2024-11-26 17:30:10.700419] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:40.862 [2024-11-26 17:30:10.700473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:40.863 "name": "Existed_Raid", 00:33:40.863 "uuid": "1bf52488-bcc0-45f5-a7f9-927f57862b0d", 00:33:40.863 "strip_size_kb": 0, 00:33:40.863 "state": "configuring", 00:33:40.863 "raid_level": "raid1", 00:33:40.863 "superblock": true, 00:33:40.863 "num_base_bdevs": 2, 00:33:40.863 "num_base_bdevs_discovered": 1, 00:33:40.863 "num_base_bdevs_operational": 2, 00:33:40.863 "base_bdevs_list": [ 00:33:40.863 { 00:33:40.863 "name": "BaseBdev1", 00:33:40.863 "uuid": "3078fbb9-c674-491e-bb44-bada907ba4be", 00:33:40.863 "is_configured": true, 00:33:40.863 "data_offset": 256, 00:33:40.863 "data_size": 7936 00:33:40.863 }, 00:33:40.863 { 00:33:40.863 "name": "BaseBdev2", 00:33:40.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.863 "is_configured": false, 00:33:40.863 "data_offset": 0, 00:33:40.863 "data_size": 0 00:33:40.863 } 00:33:40.863 ] 00:33:40.863 }' 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:40.863 17:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.122 17:30:11 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.122 [2024-11-26 17:30:11.183428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:41.122 [2024-11-26 17:30:11.184076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:41.122 [2024-11-26 17:30:11.184225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:41.122 [2024-11-26 17:30:11.184661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:41.122 BaseBdev2 00:33:41.122 [2024-11-26 17:30:11.184999] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:41.122 [2024-11-26 17:30:11.185028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:33:41.122 [2024-11-26 17:30:11.185220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:41.122 17:30:11 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.122 [ 00:33:41.122 { 00:33:41.122 "name": "BaseBdev2", 00:33:41.122 "aliases": [ 00:33:41.122 "163a3364-98fd-4753-9a1d-c499cf2ba7f1" 00:33:41.122 ], 00:33:41.122 "product_name": "Malloc disk", 00:33:41.122 "block_size": 4096, 00:33:41.122 "num_blocks": 8192, 00:33:41.122 "uuid": "163a3364-98fd-4753-9a1d-c499cf2ba7f1", 00:33:41.122 "assigned_rate_limits": { 00:33:41.122 "rw_ios_per_sec": 0, 00:33:41.122 "rw_mbytes_per_sec": 0, 00:33:41.122 "r_mbytes_per_sec": 0, 00:33:41.122 "w_mbytes_per_sec": 0 00:33:41.122 }, 00:33:41.122 "claimed": true, 00:33:41.122 "claim_type": "exclusive_write", 00:33:41.122 "zoned": false, 00:33:41.122 "supported_io_types": { 00:33:41.122 "read": true, 00:33:41.122 "write": true, 00:33:41.122 "unmap": true, 00:33:41.122 "flush": true, 00:33:41.122 "reset": true, 00:33:41.122 "nvme_admin": false, 00:33:41.122 "nvme_io": false, 00:33:41.122 "nvme_io_md": false, 00:33:41.122 "write_zeroes": true, 00:33:41.122 "zcopy": true, 00:33:41.122 "get_zone_info": false, 00:33:41.122 "zone_management": false, 00:33:41.122 "zone_append": false, 00:33:41.122 "compare": false, 00:33:41.122 "compare_and_write": false, 00:33:41.122 "abort": true, 00:33:41.122 "seek_hole": false, 00:33:41.122 "seek_data": false, 00:33:41.122 "copy": true, 00:33:41.122 "nvme_iov_md": false 
00:33:41.122 }, 00:33:41.122 "memory_domains": [ 00:33:41.122 { 00:33:41.122 "dma_device_id": "system", 00:33:41.122 "dma_device_type": 1 00:33:41.122 }, 00:33:41.122 { 00:33:41.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:41.122 "dma_device_type": 2 00:33:41.122 } 00:33:41.122 ], 00:33:41.122 "driver_specific": {} 00:33:41.122 } 00:33:41.122 ] 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:41.122 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:41.123 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.123 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.382 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.382 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:41.382 "name": "Existed_Raid", 00:33:41.382 "uuid": "1bf52488-bcc0-45f5-a7f9-927f57862b0d", 00:33:41.382 "strip_size_kb": 0, 00:33:41.382 "state": "online", 00:33:41.382 "raid_level": "raid1", 00:33:41.382 "superblock": true, 00:33:41.382 "num_base_bdevs": 2, 00:33:41.382 "num_base_bdevs_discovered": 2, 00:33:41.382 "num_base_bdevs_operational": 2, 00:33:41.382 "base_bdevs_list": [ 00:33:41.382 { 00:33:41.382 "name": "BaseBdev1", 00:33:41.382 "uuid": "3078fbb9-c674-491e-bb44-bada907ba4be", 00:33:41.382 "is_configured": true, 00:33:41.382 "data_offset": 256, 00:33:41.382 "data_size": 7936 00:33:41.382 }, 00:33:41.382 { 00:33:41.382 "name": "BaseBdev2", 00:33:41.382 "uuid": "163a3364-98fd-4753-9a1d-c499cf2ba7f1", 00:33:41.382 "is_configured": true, 00:33:41.382 "data_offset": 256, 00:33:41.382 "data_size": 7936 00:33:41.382 } 00:33:41.382 ] 00:33:41.382 }' 00:33:41.382 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:41.382 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.641 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:41.641 17:30:11 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:41.641 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:41.641 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:41.641 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:33:41.641 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:41.641 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:41.641 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:41.641 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.641 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.642 [2024-11-26 17:30:11.635216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:41.642 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.642 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:41.642 "name": "Existed_Raid", 00:33:41.642 "aliases": [ 00:33:41.642 "1bf52488-bcc0-45f5-a7f9-927f57862b0d" 00:33:41.642 ], 00:33:41.642 "product_name": "Raid Volume", 00:33:41.642 "block_size": 4096, 00:33:41.642 "num_blocks": 7936, 00:33:41.642 "uuid": "1bf52488-bcc0-45f5-a7f9-927f57862b0d", 00:33:41.642 "assigned_rate_limits": { 00:33:41.642 "rw_ios_per_sec": 0, 00:33:41.642 "rw_mbytes_per_sec": 0, 00:33:41.642 "r_mbytes_per_sec": 0, 00:33:41.642 "w_mbytes_per_sec": 0 00:33:41.642 }, 00:33:41.642 "claimed": false, 00:33:41.642 "zoned": false, 00:33:41.642 "supported_io_types": { 00:33:41.642 "read": true, 
00:33:41.642 "write": true, 00:33:41.642 "unmap": false, 00:33:41.642 "flush": false, 00:33:41.642 "reset": true, 00:33:41.642 "nvme_admin": false, 00:33:41.642 "nvme_io": false, 00:33:41.642 "nvme_io_md": false, 00:33:41.642 "write_zeroes": true, 00:33:41.642 "zcopy": false, 00:33:41.642 "get_zone_info": false, 00:33:41.642 "zone_management": false, 00:33:41.642 "zone_append": false, 00:33:41.642 "compare": false, 00:33:41.642 "compare_and_write": false, 00:33:41.642 "abort": false, 00:33:41.642 "seek_hole": false, 00:33:41.642 "seek_data": false, 00:33:41.642 "copy": false, 00:33:41.642 "nvme_iov_md": false 00:33:41.642 }, 00:33:41.642 "memory_domains": [ 00:33:41.642 { 00:33:41.642 "dma_device_id": "system", 00:33:41.642 "dma_device_type": 1 00:33:41.642 }, 00:33:41.642 { 00:33:41.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:41.642 "dma_device_type": 2 00:33:41.642 }, 00:33:41.642 { 00:33:41.642 "dma_device_id": "system", 00:33:41.642 "dma_device_type": 1 00:33:41.642 }, 00:33:41.642 { 00:33:41.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:41.642 "dma_device_type": 2 00:33:41.642 } 00:33:41.642 ], 00:33:41.642 "driver_specific": { 00:33:41.642 "raid": { 00:33:41.642 "uuid": "1bf52488-bcc0-45f5-a7f9-927f57862b0d", 00:33:41.642 "strip_size_kb": 0, 00:33:41.642 "state": "online", 00:33:41.642 "raid_level": "raid1", 00:33:41.642 "superblock": true, 00:33:41.642 "num_base_bdevs": 2, 00:33:41.642 "num_base_bdevs_discovered": 2, 00:33:41.642 "num_base_bdevs_operational": 2, 00:33:41.642 "base_bdevs_list": [ 00:33:41.642 { 00:33:41.642 "name": "BaseBdev1", 00:33:41.642 "uuid": "3078fbb9-c674-491e-bb44-bada907ba4be", 00:33:41.642 "is_configured": true, 00:33:41.642 "data_offset": 256, 00:33:41.642 "data_size": 7936 00:33:41.642 }, 00:33:41.642 { 00:33:41.642 "name": "BaseBdev2", 00:33:41.642 "uuid": "163a3364-98fd-4753-9a1d-c499cf2ba7f1", 00:33:41.642 "is_configured": true, 00:33:41.642 "data_offset": 256, 00:33:41.642 "data_size": 7936 00:33:41.642 } 
00:33:41.642 ] 00:33:41.642 } 00:33:41.642 } 00:33:41.642 }' 00:33:41.642 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:41.642 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:41.642 BaseBdev2' 00:33:41.642 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:41.642 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:33:41.642 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:41.642 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:41.642 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.901 [2024-11-26 17:30:11.842668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:33:41.901 17:30:11 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:41.901 "name": "Existed_Raid", 00:33:41.901 "uuid": "1bf52488-bcc0-45f5-a7f9-927f57862b0d", 00:33:41.901 "strip_size_kb": 0, 00:33:41.901 "state": "online", 00:33:41.901 "raid_level": "raid1", 00:33:41.901 "superblock": true, 00:33:41.901 
"num_base_bdevs": 2, 00:33:41.901 "num_base_bdevs_discovered": 1, 00:33:41.901 "num_base_bdevs_operational": 1, 00:33:41.901 "base_bdevs_list": [ 00:33:41.901 { 00:33:41.901 "name": null, 00:33:41.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:41.901 "is_configured": false, 00:33:41.901 "data_offset": 0, 00:33:41.901 "data_size": 7936 00:33:41.901 }, 00:33:41.901 { 00:33:41.901 "name": "BaseBdev2", 00:33:41.901 "uuid": "163a3364-98fd-4753-9a1d-c499cf2ba7f1", 00:33:41.901 "is_configured": true, 00:33:41.901 "data_offset": 256, 00:33:41.901 "data_size": 7936 00:33:41.901 } 00:33:41.901 ] 00:33:41.901 }' 00:33:41.901 17:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:41.901 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:42.470 [2024-11-26 17:30:12.403907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:42.470 [2024-11-26 17:30:12.404041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:42.470 [2024-11-26 17:30:12.511118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:42.470 [2024-11-26 17:30:12.511192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:42.470 [2024-11-26 17:30:12.511209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:42.470 17:30:12 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86104 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86104 ']' 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86104 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:42.470 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86104 00:33:42.742 killing process with pid 86104 00:33:42.742 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:42.742 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:42.742 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86104' 00:33:42.742 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86104 00:33:42.742 [2024-11-26 17:30:12.614560] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:42.742 17:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86104 00:33:42.742 [2024-11-26 17:30:12.633034] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:44.118 ************************************ 00:33:44.118 END TEST raid_state_function_test_sb_4k 00:33:44.118 ************************************ 00:33:44.118 17:30:13 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@328 -- # return 0 00:33:44.118 00:33:44.118 real 0m5.051s 00:33:44.118 user 0m7.098s 00:33:44.118 sys 0m0.986s 00:33:44.118 17:30:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:44.118 17:30:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:44.118 17:30:13 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:33:44.118 17:30:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:44.118 17:30:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:44.118 17:30:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:44.118 ************************************ 00:33:44.118 START TEST raid_superblock_test_4k 00:33:44.119 ************************************ 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:33:44.119 
17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86347 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86347 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86347 ']' 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:44.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:44.119 17:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:44.119 [2024-11-26 17:30:14.019075] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:33:44.119 [2024-11-26 17:30:14.019216] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86347 ] 00:33:44.119 [2024-11-26 17:30:14.205699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.377 [2024-11-26 17:30:14.349148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.636 [2024-11-26 17:30:14.580701] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:44.636 [2024-11-26 17:30:14.580749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:44.895 malloc1 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:44.895 [2024-11-26 17:30:14.912500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:44.895 [2024-11-26 17:30:14.912715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:44.895 [2024-11-26 17:30:14.912827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:44.895 [2024-11-26 17:30:14.912955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:44.895 [2024-11-26 17:30:14.915778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:44.895 [2024-11-26 17:30:14.915923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:44.895 pt1 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:44.895 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:44.896 malloc2 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:44.896 [2024-11-26 17:30:14.973290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:44.896 [2024-11-26 17:30:14.973361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:44.896 [2024-11-26 17:30:14.973395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:44.896 [2024-11-26 17:30:14.973407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:44.896 [2024-11-26 17:30:14.976091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:44.896 [2024-11-26 
17:30:14.976129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:44.896 pt2 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:44.896 [2024-11-26 17:30:14.985342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:44.896 [2024-11-26 17:30:14.987658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:44.896 [2024-11-26 17:30:14.987843] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:44.896 [2024-11-26 17:30:14.987862] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:44.896 [2024-11-26 17:30:14.988157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:44.896 [2024-11-26 17:30:14.988337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:44.896 [2024-11-26 17:30:14.988357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:44.896 [2024-11-26 17:30:14.988558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.896 17:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:45.155 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.155 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:45.155 "name": "raid_bdev1", 00:33:45.155 "uuid": "00f4638d-2960-405e-b1c9-6dcf24512cce", 00:33:45.155 "strip_size_kb": 0, 00:33:45.155 "state": "online", 00:33:45.155 "raid_level": "raid1", 00:33:45.155 "superblock": true, 00:33:45.155 "num_base_bdevs": 2, 00:33:45.155 
"num_base_bdevs_discovered": 2, 00:33:45.155 "num_base_bdevs_operational": 2, 00:33:45.155 "base_bdevs_list": [ 00:33:45.155 { 00:33:45.155 "name": "pt1", 00:33:45.155 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:45.155 "is_configured": true, 00:33:45.155 "data_offset": 256, 00:33:45.155 "data_size": 7936 00:33:45.155 }, 00:33:45.155 { 00:33:45.155 "name": "pt2", 00:33:45.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:45.155 "is_configured": true, 00:33:45.155 "data_offset": 256, 00:33:45.155 "data_size": 7936 00:33:45.155 } 00:33:45.155 ] 00:33:45.155 }' 00:33:45.155 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:45.155 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:45.414 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:33:45.414 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:45.414 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:45.414 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:45.414 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:33:45.414 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:45.414 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:45.414 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:45.414 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.414 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:45.414 [2024-11-26 17:30:15.508921] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:45.673 "name": "raid_bdev1", 00:33:45.673 "aliases": [ 00:33:45.673 "00f4638d-2960-405e-b1c9-6dcf24512cce" 00:33:45.673 ], 00:33:45.673 "product_name": "Raid Volume", 00:33:45.673 "block_size": 4096, 00:33:45.673 "num_blocks": 7936, 00:33:45.673 "uuid": "00f4638d-2960-405e-b1c9-6dcf24512cce", 00:33:45.673 "assigned_rate_limits": { 00:33:45.673 "rw_ios_per_sec": 0, 00:33:45.673 "rw_mbytes_per_sec": 0, 00:33:45.673 "r_mbytes_per_sec": 0, 00:33:45.673 "w_mbytes_per_sec": 0 00:33:45.673 }, 00:33:45.673 "claimed": false, 00:33:45.673 "zoned": false, 00:33:45.673 "supported_io_types": { 00:33:45.673 "read": true, 00:33:45.673 "write": true, 00:33:45.673 "unmap": false, 00:33:45.673 "flush": false, 00:33:45.673 "reset": true, 00:33:45.673 "nvme_admin": false, 00:33:45.673 "nvme_io": false, 00:33:45.673 "nvme_io_md": false, 00:33:45.673 "write_zeroes": true, 00:33:45.673 "zcopy": false, 00:33:45.673 "get_zone_info": false, 00:33:45.673 "zone_management": false, 00:33:45.673 "zone_append": false, 00:33:45.673 "compare": false, 00:33:45.673 "compare_and_write": false, 00:33:45.673 "abort": false, 00:33:45.673 "seek_hole": false, 00:33:45.673 "seek_data": false, 00:33:45.673 "copy": false, 00:33:45.673 "nvme_iov_md": false 00:33:45.673 }, 00:33:45.673 "memory_domains": [ 00:33:45.673 { 00:33:45.673 "dma_device_id": "system", 00:33:45.673 "dma_device_type": 1 00:33:45.673 }, 00:33:45.673 { 00:33:45.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:45.673 "dma_device_type": 2 00:33:45.673 }, 00:33:45.673 { 00:33:45.673 "dma_device_id": "system", 00:33:45.673 "dma_device_type": 1 00:33:45.673 }, 00:33:45.673 { 00:33:45.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:45.673 "dma_device_type": 2 00:33:45.673 } 00:33:45.673 ], 
00:33:45.673 "driver_specific": { 00:33:45.673 "raid": { 00:33:45.673 "uuid": "00f4638d-2960-405e-b1c9-6dcf24512cce", 00:33:45.673 "strip_size_kb": 0, 00:33:45.673 "state": "online", 00:33:45.673 "raid_level": "raid1", 00:33:45.673 "superblock": true, 00:33:45.673 "num_base_bdevs": 2, 00:33:45.673 "num_base_bdevs_discovered": 2, 00:33:45.673 "num_base_bdevs_operational": 2, 00:33:45.673 "base_bdevs_list": [ 00:33:45.673 { 00:33:45.673 "name": "pt1", 00:33:45.673 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:45.673 "is_configured": true, 00:33:45.673 "data_offset": 256, 00:33:45.673 "data_size": 7936 00:33:45.673 }, 00:33:45.673 { 00:33:45.673 "name": "pt2", 00:33:45.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:45.673 "is_configured": true, 00:33:45.673 "data_offset": 256, 00:33:45.673 "data_size": 7936 00:33:45.673 } 00:33:45.673 ] 00:33:45.673 } 00:33:45.673 } 00:33:45.673 }' 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:45.673 pt2' 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:33:45.673 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:45.673 [2024-11-26 17:30:15.756589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=00f4638d-2960-405e-b1c9-6dcf24512cce 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 00f4638d-2960-405e-b1c9-6dcf24512cce ']' 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:45.939 [2024-11-26 17:30:15.796168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:45.939 [2024-11-26 17:30:15.796202] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:45.939 [2024-11-26 17:30:15.796314] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:45.939 [2024-11-26 17:30:15.796384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:45.939 [2024-11-26 17:30:15.796401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:45.939 [2024-11-26 17:30:15.904040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:45.939 [2024-11-26 17:30:15.906491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:45.939 [2024-11-26 17:30:15.906599] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:45.939 [2024-11-26 17:30:15.906669] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:45.939 [2024-11-26 17:30:15.906689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:45.939 [2024-11-26 17:30:15.906704] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:33:45.939 request: 00:33:45.939 { 00:33:45.939 "name": "raid_bdev1", 00:33:45.939 "raid_level": "raid1", 00:33:45.939 "base_bdevs": [ 00:33:45.939 "malloc1", 00:33:45.939 "malloc2" 00:33:45.939 ], 00:33:45.939 "superblock": false, 00:33:45.939 "method": "bdev_raid_create", 00:33:45.939 "req_id": 1 00:33:45.939 } 00:33:45.939 Got JSON-RPC error response 00:33:45.939 response: 00:33:45.939 { 00:33:45.939 "code": -17, 00:33:45.939 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:45.939 } 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.939 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:45.939 [2024-11-26 17:30:15.971918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:45.939 [2024-11-26 17:30:15.971987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:45.939 [2024-11-26 17:30:15.972019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:45.940 [2024-11-26 17:30:15.972035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:45.940 [2024-11-26 17:30:15.974923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:45.940 [2024-11-26 17:30:15.974967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:45.940 [2024-11-26 17:30:15.975084] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:45.940 [2024-11-26 17:30:15.975154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:45.940 pt1 00:33:45.940 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.940 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:33:45.940 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:45.940 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:45.940 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:45.940 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:45.940 17:30:15 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:45.940 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:45.940 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:45.940 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:45.940 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:45.940 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:45.940 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.940 17:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:45.940 17:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:45.940 17:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.940 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:45.940 "name": "raid_bdev1", 00:33:45.940 "uuid": "00f4638d-2960-405e-b1c9-6dcf24512cce", 00:33:45.940 "strip_size_kb": 0, 00:33:45.940 "state": "configuring", 00:33:45.940 "raid_level": "raid1", 00:33:45.940 "superblock": true, 00:33:45.940 "num_base_bdevs": 2, 00:33:45.940 "num_base_bdevs_discovered": 1, 00:33:45.940 "num_base_bdevs_operational": 2, 00:33:45.940 "base_bdevs_list": [ 00:33:45.940 { 00:33:45.940 "name": "pt1", 00:33:45.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:45.940 "is_configured": true, 00:33:45.940 "data_offset": 256, 00:33:45.940 "data_size": 7936 00:33:45.940 }, 00:33:45.940 { 00:33:45.940 "name": null, 00:33:45.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:45.940 "is_configured": false, 00:33:45.940 "data_offset": 256, 00:33:45.940 "data_size": 7936 00:33:45.940 } 
00:33:45.940 ] 00:33:45.940 }' 00:33:45.940 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:45.940 17:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.510 [2024-11-26 17:30:16.427398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:46.510 [2024-11-26 17:30:16.427490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:46.510 [2024-11-26 17:30:16.427528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:46.510 [2024-11-26 17:30:16.427545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:46.510 [2024-11-26 17:30:16.428109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:46.510 [2024-11-26 17:30:16.428133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:46.510 [2024-11-26 17:30:16.428232] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:46.510 [2024-11-26 17:30:16.428265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:46.510 [2024-11-26 17:30:16.428391] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:33:46.510 [2024-11-26 17:30:16.428405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:46.510 [2024-11-26 17:30:16.428737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:46.510 [2024-11-26 17:30:16.428899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:46.510 [2024-11-26 17:30:16.429040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:33:46.510 [2024-11-26 17:30:16.429248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:46.510 pt2 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.510 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:46.510 "name": "raid_bdev1", 00:33:46.510 "uuid": "00f4638d-2960-405e-b1c9-6dcf24512cce", 00:33:46.511 "strip_size_kb": 0, 00:33:46.511 "state": "online", 00:33:46.511 "raid_level": "raid1", 00:33:46.511 "superblock": true, 00:33:46.511 "num_base_bdevs": 2, 00:33:46.511 "num_base_bdevs_discovered": 2, 00:33:46.511 "num_base_bdevs_operational": 2, 00:33:46.511 "base_bdevs_list": [ 00:33:46.511 { 00:33:46.511 "name": "pt1", 00:33:46.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:46.511 "is_configured": true, 00:33:46.511 "data_offset": 256, 00:33:46.511 "data_size": 7936 00:33:46.511 }, 00:33:46.511 { 00:33:46.511 "name": "pt2", 00:33:46.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:46.511 "is_configured": true, 00:33:46.511 "data_offset": 256, 00:33:46.511 "data_size": 7936 00:33:46.511 } 00:33:46.511 ] 00:33:46.511 }' 00:33:46.511 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:46.511 17:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.770 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:33:46.770 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:46.770 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:46.770 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:46.770 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:33:46.770 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:46.770 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:46.770 17:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.770 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:46.770 17:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.770 [2024-11-26 17:30:16.871068] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:47.028 17:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.028 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:47.028 "name": "raid_bdev1", 00:33:47.028 "aliases": [ 00:33:47.028 "00f4638d-2960-405e-b1c9-6dcf24512cce" 00:33:47.028 ], 00:33:47.028 "product_name": "Raid Volume", 00:33:47.028 "block_size": 4096, 00:33:47.028 "num_blocks": 7936, 00:33:47.028 "uuid": "00f4638d-2960-405e-b1c9-6dcf24512cce", 00:33:47.028 "assigned_rate_limits": { 00:33:47.028 "rw_ios_per_sec": 0, 00:33:47.028 "rw_mbytes_per_sec": 0, 00:33:47.028 "r_mbytes_per_sec": 0, 00:33:47.028 "w_mbytes_per_sec": 0 00:33:47.028 }, 00:33:47.028 "claimed": false, 00:33:47.028 "zoned": false, 00:33:47.028 "supported_io_types": { 00:33:47.028 "read": true, 00:33:47.028 "write": true, 00:33:47.028 "unmap": false, 
00:33:47.028 "flush": false, 00:33:47.028 "reset": true, 00:33:47.028 "nvme_admin": false, 00:33:47.028 "nvme_io": false, 00:33:47.028 "nvme_io_md": false, 00:33:47.028 "write_zeroes": true, 00:33:47.028 "zcopy": false, 00:33:47.028 "get_zone_info": false, 00:33:47.028 "zone_management": false, 00:33:47.028 "zone_append": false, 00:33:47.028 "compare": false, 00:33:47.028 "compare_and_write": false, 00:33:47.028 "abort": false, 00:33:47.028 "seek_hole": false, 00:33:47.028 "seek_data": false, 00:33:47.028 "copy": false, 00:33:47.028 "nvme_iov_md": false 00:33:47.028 }, 00:33:47.028 "memory_domains": [ 00:33:47.028 { 00:33:47.028 "dma_device_id": "system", 00:33:47.028 "dma_device_type": 1 00:33:47.028 }, 00:33:47.028 { 00:33:47.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:47.028 "dma_device_type": 2 00:33:47.028 }, 00:33:47.029 { 00:33:47.029 "dma_device_id": "system", 00:33:47.029 "dma_device_type": 1 00:33:47.029 }, 00:33:47.029 { 00:33:47.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:47.029 "dma_device_type": 2 00:33:47.029 } 00:33:47.029 ], 00:33:47.029 "driver_specific": { 00:33:47.029 "raid": { 00:33:47.029 "uuid": "00f4638d-2960-405e-b1c9-6dcf24512cce", 00:33:47.029 "strip_size_kb": 0, 00:33:47.029 "state": "online", 00:33:47.029 "raid_level": "raid1", 00:33:47.029 "superblock": true, 00:33:47.029 "num_base_bdevs": 2, 00:33:47.029 "num_base_bdevs_discovered": 2, 00:33:47.029 "num_base_bdevs_operational": 2, 00:33:47.029 "base_bdevs_list": [ 00:33:47.029 { 00:33:47.029 "name": "pt1", 00:33:47.029 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:47.029 "is_configured": true, 00:33:47.029 "data_offset": 256, 00:33:47.029 "data_size": 7936 00:33:47.029 }, 00:33:47.029 { 00:33:47.029 "name": "pt2", 00:33:47.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:47.029 "is_configured": true, 00:33:47.029 "data_offset": 256, 00:33:47.029 "data_size": 7936 00:33:47.029 } 00:33:47.029 ] 00:33:47.029 } 00:33:47.029 } 00:33:47.029 }' 00:33:47.029 
17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:47.029 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:47.029 pt2' 00:33:47.029 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:47.029 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:33:47.029 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:47.029 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:47.029 17:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:47.029 17:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.029 17:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.029 17:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:47.029 
17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.029 [2024-11-26 17:30:17.070815] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 00f4638d-2960-405e-b1c9-6dcf24512cce '!=' 00f4638d-2960-405e-b1c9-6dcf24512cce ']' 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.029 [2024-11-26 17:30:17.114584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:47.029 
17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.029 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.288 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.288 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:47.288 "name": "raid_bdev1", 00:33:47.288 "uuid": "00f4638d-2960-405e-b1c9-6dcf24512cce", 
00:33:47.288 "strip_size_kb": 0, 00:33:47.288 "state": "online", 00:33:47.288 "raid_level": "raid1", 00:33:47.288 "superblock": true, 00:33:47.288 "num_base_bdevs": 2, 00:33:47.288 "num_base_bdevs_discovered": 1, 00:33:47.288 "num_base_bdevs_operational": 1, 00:33:47.288 "base_bdevs_list": [ 00:33:47.288 { 00:33:47.288 "name": null, 00:33:47.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.288 "is_configured": false, 00:33:47.288 "data_offset": 0, 00:33:47.288 "data_size": 7936 00:33:47.288 }, 00:33:47.288 { 00:33:47.288 "name": "pt2", 00:33:47.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:47.288 "is_configured": true, 00:33:47.288 "data_offset": 256, 00:33:47.288 "data_size": 7936 00:33:47.288 } 00:33:47.288 ] 00:33:47.288 }' 00:33:47.288 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:47.288 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.546 [2024-11-26 17:30:17.537914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:47.546 [2024-11-26 17:30:17.537951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:47.546 [2024-11-26 17:30:17.538057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:47.546 [2024-11-26 17:30:17.538129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:47.546 [2024-11-26 17:30:17.538148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:47.546 17:30:17 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:33:47.546 17:30:17 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.546 [2024-11-26 17:30:17.609803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:47.546 [2024-11-26 17:30:17.609878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:47.546 [2024-11-26 17:30:17.609901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:33:47.546 [2024-11-26 17:30:17.609917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:47.546 [2024-11-26 17:30:17.612827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:47.546 [2024-11-26 17:30:17.612874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:47.546 [2024-11-26 17:30:17.612974] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:47.546 [2024-11-26 17:30:17.613035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:47.546 [2024-11-26 17:30:17.613156] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:47.546 [2024-11-26 17:30:17.613172] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:47.546 [2024-11-26 17:30:17.613452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:47.546 [2024-11-26 17:30:17.613678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:47.546 [2024-11-26 17:30:17.613707] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:33:47.546 pt2 00:33:47.546 [2024-11-26 17:30:17.613907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.546 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.805 17:30:17 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:47.805 "name": "raid_bdev1", 00:33:47.805 "uuid": "00f4638d-2960-405e-b1c9-6dcf24512cce", 00:33:47.805 "strip_size_kb": 0, 00:33:47.805 "state": "online", 00:33:47.805 "raid_level": "raid1", 00:33:47.805 "superblock": true, 00:33:47.805 "num_base_bdevs": 2, 00:33:47.805 "num_base_bdevs_discovered": 1, 00:33:47.805 "num_base_bdevs_operational": 1, 00:33:47.805 "base_bdevs_list": [ 00:33:47.805 { 00:33:47.805 "name": null, 00:33:47.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.805 "is_configured": false, 00:33:47.805 "data_offset": 256, 00:33:47.805 "data_size": 7936 00:33:47.805 }, 00:33:47.805 { 00:33:47.805 "name": "pt2", 00:33:47.805 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:47.805 "is_configured": true, 00:33:47.805 "data_offset": 256, 00:33:47.805 "data_size": 7936 00:33:47.805 } 00:33:47.805 ] 00:33:47.805 }' 00:33:47.805 17:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:47.805 17:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:48.065 [2024-11-26 17:30:18.033813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:48.065 [2024-11-26 17:30:18.033853] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:48.065 [2024-11-26 17:30:18.033949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:48.065 [2024-11-26 17:30:18.034011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:48.065 [2024-11-26 17:30:18.034023] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:48.065 [2024-11-26 17:30:18.093815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:48.065 [2024-11-26 17:30:18.093892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:48.065 [2024-11-26 17:30:18.093920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:33:48.065 [2024-11-26 17:30:18.093934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:48.065 [2024-11-26 17:30:18.096806] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:48.065 [2024-11-26 17:30:18.096850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:48.065 [2024-11-26 17:30:18.096958] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:48.065 [2024-11-26 17:30:18.097014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:48.065 [2024-11-26 17:30:18.097203] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:48.065 [2024-11-26 17:30:18.097216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:48.065 [2024-11-26 17:30:18.097237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:33:48.065 [2024-11-26 17:30:18.097315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:48.065 [2024-11-26 17:30:18.097407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:33:48.065 [2024-11-26 17:30:18.097417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:48.065 [2024-11-26 17:30:18.097734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:48.065 [2024-11-26 17:30:18.097896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:33:48.065 [2024-11-26 17:30:18.097911] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:33:48.065 [2024-11-26 17:30:18.098126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:48.065 pt1 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.065 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:48.065 "name": "raid_bdev1", 00:33:48.065 "uuid": "00f4638d-2960-405e-b1c9-6dcf24512cce", 00:33:48.065 "strip_size_kb": 0, 00:33:48.066 "state": "online", 00:33:48.066 "raid_level": "raid1", 
00:33:48.066 "superblock": true, 00:33:48.066 "num_base_bdevs": 2, 00:33:48.066 "num_base_bdevs_discovered": 1, 00:33:48.066 "num_base_bdevs_operational": 1, 00:33:48.066 "base_bdevs_list": [ 00:33:48.066 { 00:33:48.066 "name": null, 00:33:48.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:48.066 "is_configured": false, 00:33:48.066 "data_offset": 256, 00:33:48.066 "data_size": 7936 00:33:48.066 }, 00:33:48.066 { 00:33:48.066 "name": "pt2", 00:33:48.066 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:48.066 "is_configured": true, 00:33:48.066 "data_offset": 256, 00:33:48.066 "data_size": 7936 00:33:48.066 } 00:33:48.066 ] 00:33:48.066 }' 00:33:48.066 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:48.066 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:48.634 
[2024-11-26 17:30:18.602088] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 00f4638d-2960-405e-b1c9-6dcf24512cce '!=' 00f4638d-2960-405e-b1c9-6dcf24512cce ']' 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86347 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86347 ']' 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86347 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86347 00:33:48.634 killing process with pid 86347 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86347' 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86347 00:33:48.634 [2024-11-26 17:30:18.688218] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:48.634 17:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86347 00:33:48.634 [2024-11-26 17:30:18.688361] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:48.634 [2024-11-26 17:30:18.688434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:33:48.634 [2024-11-26 17:30:18.688456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:33:48.893 [2024-11-26 17:30:18.919355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:50.269 17:30:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:33:50.269 00:33:50.269 real 0m6.279s 00:33:50.269 user 0m9.377s 00:33:50.269 sys 0m1.282s 00:33:50.269 ************************************ 00:33:50.269 END TEST raid_superblock_test_4k 00:33:50.269 ************************************ 00:33:50.269 17:30:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:50.269 17:30:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:50.269 17:30:20 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:33:50.269 17:30:20 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:33:50.269 17:30:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:33:50.269 17:30:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:50.269 17:30:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:50.269 ************************************ 00:33:50.269 START TEST raid_rebuild_test_sb_4k 00:33:50.269 ************************************ 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:33:50.269 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:33:50.269 17:30:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:33:50.270 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:33:50.270 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86679 00:33:50.270 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86679 00:33:50.270 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:50.270 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86679 ']' 00:33:50.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:50.270 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.270 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:50.270 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.270 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:50.270 17:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:50.529 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:50.529 Zero copy mechanism will not be used. 00:33:50.529 [2024-11-26 17:30:20.382166] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:33:50.529 [2024-11-26 17:30:20.382309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86679 ] 00:33:50.529 [2024-11-26 17:30:20.565789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.790 [2024-11-26 17:30:20.712885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.049 [2024-11-26 17:30:20.938126] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:51.049 [2024-11-26 17:30:20.938204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.309 BaseBdev1_malloc 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.309 [2024-11-26 17:30:21.292812] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:51.309 [2024-11-26 17:30:21.292888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:51.309 [2024-11-26 17:30:21.292915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:51.309 [2024-11-26 17:30:21.292931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:51.309 [2024-11-26 17:30:21.295504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:51.309 [2024-11-26 17:30:21.295561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:51.309 BaseBdev1 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.309 BaseBdev2_malloc 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.309 [2024-11-26 17:30:21.352588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:51.309 [2024-11-26 17:30:21.352661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:33:51.309 [2024-11-26 17:30:21.352691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:51.309 [2024-11-26 17:30:21.352709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:51.309 [2024-11-26 17:30:21.355312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:51.309 [2024-11-26 17:30:21.355356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:51.309 BaseBdev2 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.309 spare_malloc 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.309 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.568 spare_delay 00:33:51.568 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.568 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:51.568 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.568 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.568 
[2024-11-26 17:30:21.436478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:51.569 [2024-11-26 17:30:21.436711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:51.569 [2024-11-26 17:30:21.436746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:51.569 [2024-11-26 17:30:21.436762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:51.569 [2024-11-26 17:30:21.439473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:51.569 [2024-11-26 17:30:21.439533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:51.569 spare 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.569 [2024-11-26 17:30:21.448547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:51.569 [2024-11-26 17:30:21.450773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:51.569 [2024-11-26 17:30:21.450965] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:51.569 [2024-11-26 17:30:21.450984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:51.569 [2024-11-26 17:30:21.451259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:51.569 [2024-11-26 17:30:21.451437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:51.569 [2024-11-26 
17:30:21.451448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:51.569 [2024-11-26 17:30:21.451648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:51.569 "name": "raid_bdev1", 00:33:51.569 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:33:51.569 "strip_size_kb": 0, 00:33:51.569 "state": "online", 00:33:51.569 "raid_level": "raid1", 00:33:51.569 "superblock": true, 00:33:51.569 "num_base_bdevs": 2, 00:33:51.569 "num_base_bdevs_discovered": 2, 00:33:51.569 "num_base_bdevs_operational": 2, 00:33:51.569 "base_bdevs_list": [ 00:33:51.569 { 00:33:51.569 "name": "BaseBdev1", 00:33:51.569 "uuid": "2e254396-5f1f-5622-ae1e-4b19f028e981", 00:33:51.569 "is_configured": true, 00:33:51.569 "data_offset": 256, 00:33:51.569 "data_size": 7936 00:33:51.569 }, 00:33:51.569 { 00:33:51.569 "name": "BaseBdev2", 00:33:51.569 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:33:51.569 "is_configured": true, 00:33:51.569 "data_offset": 256, 00:33:51.569 "data_size": 7936 00:33:51.569 } 00:33:51.569 ] 00:33:51.569 }' 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:51.569 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.827 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:51.827 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:33:51.827 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.827 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.827 [2024-11-26 17:30:21.872219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:51.827 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.828 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:33:51.828 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:51.828 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.828 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.828 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:51.828 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.097 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:33:52.097 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:33:52.097 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:33:52.097 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:33:52.097 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:33:52.097 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:52.097 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:33:52.097 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:52.097 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:52.097 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:52.097 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:33:52.097 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:52.097 17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:52.097 
17:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:52.097 [2024-11-26 17:30:22.163715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:52.097 /dev/nbd0 00:33:52.383 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:52.384 1+0 records in 00:33:52.384 1+0 records out 00:33:52.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440105 s, 9.3 MB/s 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:33:52.384 17:30:22 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:33:52.384 17:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:33:52.950 7936+0 records in 00:33:52.950 7936+0 records out 00:33:52.950 32505856 bytes (33 MB, 31 MiB) copied, 0.761354 s, 42.7 MB/s 00:33:52.950 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:33:52.950 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:52.950 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:52.950 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:52.950 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:33:52.950 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:52.950 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:53.208 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:53.208 
[2024-11-26 17:30:23.250146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:53.208 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:53.208 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:53.208 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:53.208 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:53.208 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:53.208 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:33:53.208 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:33:53.208 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:33:53.208 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.208 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:53.208 [2024-11-26 17:30:23.270250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:53.208 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.208 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:53.208 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:53.209 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:53.209 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:53.209 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:53.209 17:30:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:53.209 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:53.209 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:53.209 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:53.209 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:53.209 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:53.209 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:53.209 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.209 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:53.209 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.467 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:53.467 "name": "raid_bdev1", 00:33:53.467 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:33:53.467 "strip_size_kb": 0, 00:33:53.467 "state": "online", 00:33:53.467 "raid_level": "raid1", 00:33:53.467 "superblock": true, 00:33:53.467 "num_base_bdevs": 2, 00:33:53.467 "num_base_bdevs_discovered": 1, 00:33:53.467 "num_base_bdevs_operational": 1, 00:33:53.467 "base_bdevs_list": [ 00:33:53.467 { 00:33:53.467 "name": null, 00:33:53.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:53.467 "is_configured": false, 00:33:53.467 "data_offset": 0, 00:33:53.467 "data_size": 7936 00:33:53.467 }, 00:33:53.467 { 00:33:53.467 "name": "BaseBdev2", 00:33:53.467 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:33:53.467 "is_configured": true, 00:33:53.467 "data_offset": 256, 00:33:53.467 
"data_size": 7936 00:33:53.467 } 00:33:53.467 ] 00:33:53.467 }' 00:33:53.467 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:53.467 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:53.727 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:53.727 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.727 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:53.727 [2024-11-26 17:30:23.677918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:53.727 [2024-11-26 17:30:23.697807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:33:53.727 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.727 17:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:33:53.727 [2024-11-26 17:30:23.700350] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:54.664 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:54.664 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:54.664 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:54.664 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:54.664 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:54.664 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.664 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:54.664 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:54.664 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:54.664 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.664 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:54.664 "name": "raid_bdev1", 00:33:54.664 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:33:54.664 "strip_size_kb": 0, 00:33:54.664 "state": "online", 00:33:54.664 "raid_level": "raid1", 00:33:54.664 "superblock": true, 00:33:54.664 "num_base_bdevs": 2, 00:33:54.664 "num_base_bdevs_discovered": 2, 00:33:54.664 "num_base_bdevs_operational": 2, 00:33:54.664 "process": { 00:33:54.664 "type": "rebuild", 00:33:54.664 "target": "spare", 00:33:54.664 "progress": { 00:33:54.664 "blocks": 2560, 00:33:54.664 "percent": 32 00:33:54.664 } 00:33:54.664 }, 00:33:54.664 "base_bdevs_list": [ 00:33:54.664 { 00:33:54.664 "name": "spare", 00:33:54.664 "uuid": "4ee5e0b3-2536-5701-8d11-04c8049fb27d", 00:33:54.664 "is_configured": true, 00:33:54.664 "data_offset": 256, 00:33:54.664 "data_size": 7936 00:33:54.664 }, 00:33:54.664 { 00:33:54.664 "name": "BaseBdev2", 00:33:54.664 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:33:54.664 "is_configured": true, 00:33:54.664 "data_offset": 256, 00:33:54.664 "data_size": 7936 00:33:54.664 } 00:33:54.664 ] 00:33:54.664 }' 00:33:54.664 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:54.923 [2024-11-26 17:30:24.847903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:54.923 [2024-11-26 17:30:24.907586] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:54.923 [2024-11-26 17:30:24.907677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:54.923 [2024-11-26 17:30:24.907698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:54.923 [2024-11-26 17:30:24.907711] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.923 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:54.924 17:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.924 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:54.924 "name": "raid_bdev1", 00:33:54.924 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:33:54.924 "strip_size_kb": 0, 00:33:54.924 "state": "online", 00:33:54.924 "raid_level": "raid1", 00:33:54.924 "superblock": true, 00:33:54.924 "num_base_bdevs": 2, 00:33:54.924 "num_base_bdevs_discovered": 1, 00:33:54.924 "num_base_bdevs_operational": 1, 00:33:54.924 "base_bdevs_list": [ 00:33:54.924 { 00:33:54.924 "name": null, 00:33:54.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:54.924 "is_configured": false, 00:33:54.924 "data_offset": 0, 00:33:54.924 "data_size": 7936 00:33:54.924 }, 00:33:54.924 { 00:33:54.924 "name": "BaseBdev2", 00:33:54.924 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:33:54.924 "is_configured": true, 00:33:54.924 "data_offset": 256, 00:33:54.924 "data_size": 7936 00:33:54.924 } 00:33:54.924 ] 00:33:54.924 }' 00:33:54.924 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:54.924 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:55.491 17:30:25 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:55.491 "name": "raid_bdev1", 00:33:55.491 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:33:55.491 "strip_size_kb": 0, 00:33:55.491 "state": "online", 00:33:55.491 "raid_level": "raid1", 00:33:55.491 "superblock": true, 00:33:55.491 "num_base_bdevs": 2, 00:33:55.491 "num_base_bdevs_discovered": 1, 00:33:55.491 "num_base_bdevs_operational": 1, 00:33:55.491 "base_bdevs_list": [ 00:33:55.491 { 00:33:55.491 "name": null, 00:33:55.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:55.491 "is_configured": false, 00:33:55.491 "data_offset": 0, 00:33:55.491 "data_size": 7936 00:33:55.491 }, 00:33:55.491 { 00:33:55.491 "name": "BaseBdev2", 00:33:55.491 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:33:55.491 "is_configured": true, 00:33:55.491 "data_offset": 
256, 00:33:55.491 "data_size": 7936 00:33:55.491 } 00:33:55.491 ] 00:33:55.491 }' 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:55.491 [2024-11-26 17:30:25.571512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:55.491 [2024-11-26 17:30:25.590753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.491 17:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:33:55.491 [2024-11-26 17:30:25.593561] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:56.871 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:56.871 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:56.871 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:56.871 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:56.871 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:56.871 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.871 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:56.871 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.871 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:56.872 "name": "raid_bdev1", 00:33:56.872 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:33:56.872 "strip_size_kb": 0, 00:33:56.872 "state": "online", 00:33:56.872 "raid_level": "raid1", 00:33:56.872 "superblock": true, 00:33:56.872 "num_base_bdevs": 2, 00:33:56.872 "num_base_bdevs_discovered": 2, 00:33:56.872 "num_base_bdevs_operational": 2, 00:33:56.872 "process": { 00:33:56.872 "type": "rebuild", 00:33:56.872 "target": "spare", 00:33:56.872 "progress": { 00:33:56.872 "blocks": 2560, 00:33:56.872 "percent": 32 00:33:56.872 } 00:33:56.872 }, 00:33:56.872 "base_bdevs_list": [ 00:33:56.872 { 00:33:56.872 "name": "spare", 00:33:56.872 "uuid": "4ee5e0b3-2536-5701-8d11-04c8049fb27d", 00:33:56.872 "is_configured": true, 00:33:56.872 "data_offset": 256, 00:33:56.872 "data_size": 7936 00:33:56.872 }, 00:33:56.872 { 00:33:56.872 "name": "BaseBdev2", 00:33:56.872 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:33:56.872 "is_configured": true, 00:33:56.872 "data_offset": 256, 00:33:56.872 "data_size": 7936 00:33:56.872 } 00:33:56.872 ] 00:33:56.872 }' 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:33:56.872 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=692 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.872 17:30:26 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:56.872 "name": "raid_bdev1", 00:33:56.872 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:33:56.872 "strip_size_kb": 0, 00:33:56.872 "state": "online", 00:33:56.872 "raid_level": "raid1", 00:33:56.872 "superblock": true, 00:33:56.872 "num_base_bdevs": 2, 00:33:56.872 "num_base_bdevs_discovered": 2, 00:33:56.872 "num_base_bdevs_operational": 2, 00:33:56.872 "process": { 00:33:56.872 "type": "rebuild", 00:33:56.872 "target": "spare", 00:33:56.872 "progress": { 00:33:56.872 "blocks": 2816, 00:33:56.872 "percent": 35 00:33:56.872 } 00:33:56.872 }, 00:33:56.872 "base_bdevs_list": [ 00:33:56.872 { 00:33:56.872 "name": "spare", 00:33:56.872 "uuid": "4ee5e0b3-2536-5701-8d11-04c8049fb27d", 00:33:56.872 "is_configured": true, 00:33:56.872 "data_offset": 256, 00:33:56.872 "data_size": 7936 00:33:56.872 }, 00:33:56.872 { 00:33:56.872 "name": "BaseBdev2", 00:33:56.872 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:33:56.872 "is_configured": true, 00:33:56.872 "data_offset": 256, 00:33:56.872 "data_size": 7936 00:33:56.872 } 00:33:56.872 ] 00:33:56.872 }' 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:56.872 17:30:26 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:33:57.820 17:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:57.820 17:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:57.820 17:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:57.820 17:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:57.820 17:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:57.820 17:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:57.820 17:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:57.820 17:30:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.820 17:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:57.820 17:30:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:57.820 17:30:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.820 17:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:57.820 "name": "raid_bdev1", 00:33:57.820 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:33:57.820 "strip_size_kb": 0, 00:33:57.820 "state": "online", 00:33:57.820 "raid_level": "raid1", 00:33:57.820 "superblock": true, 00:33:57.820 "num_base_bdevs": 2, 00:33:57.820 "num_base_bdevs_discovered": 2, 00:33:57.820 "num_base_bdevs_operational": 2, 00:33:57.820 "process": { 00:33:57.820 "type": "rebuild", 00:33:57.820 "target": "spare", 00:33:57.820 "progress": { 00:33:57.820 "blocks": 5632, 00:33:57.820 "percent": 70 00:33:57.820 } 00:33:57.820 }, 00:33:57.820 "base_bdevs_list": [ 00:33:57.820 { 
00:33:57.820 "name": "spare", 00:33:57.820 "uuid": "4ee5e0b3-2536-5701-8d11-04c8049fb27d", 00:33:57.820 "is_configured": true, 00:33:57.820 "data_offset": 256, 00:33:57.820 "data_size": 7936 00:33:57.820 }, 00:33:57.820 { 00:33:57.820 "name": "BaseBdev2", 00:33:57.820 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:33:57.820 "is_configured": true, 00:33:57.820 "data_offset": 256, 00:33:57.820 "data_size": 7936 00:33:57.820 } 00:33:57.820 ] 00:33:57.820 }' 00:33:57.820 17:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:58.079 17:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:58.079 17:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:58.079 17:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:58.079 17:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:58.645 [2024-11-26 17:30:28.712862] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:58.645 [2024-11-26 17:30:28.712964] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:58.645 [2024-11-26 17:30:28.713134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:59.212 "name": "raid_bdev1", 00:33:59.212 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:33:59.212 "strip_size_kb": 0, 00:33:59.212 "state": "online", 00:33:59.212 "raid_level": "raid1", 00:33:59.212 "superblock": true, 00:33:59.212 "num_base_bdevs": 2, 00:33:59.212 "num_base_bdevs_discovered": 2, 00:33:59.212 "num_base_bdevs_operational": 2, 00:33:59.212 "base_bdevs_list": [ 00:33:59.212 { 00:33:59.212 "name": "spare", 00:33:59.212 "uuid": "4ee5e0b3-2536-5701-8d11-04c8049fb27d", 00:33:59.212 "is_configured": true, 00:33:59.212 "data_offset": 256, 00:33:59.212 "data_size": 7936 00:33:59.212 }, 00:33:59.212 { 00:33:59.212 "name": "BaseBdev2", 00:33:59.212 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:33:59.212 "is_configured": true, 00:33:59.212 "data_offset": 256, 00:33:59.212 "data_size": 7936 00:33:59.212 } 00:33:59.212 ] 00:33:59.212 }' 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:59.212 "name": "raid_bdev1", 00:33:59.212 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:33:59.212 "strip_size_kb": 0, 00:33:59.212 "state": "online", 00:33:59.212 "raid_level": "raid1", 00:33:59.212 "superblock": true, 00:33:59.212 "num_base_bdevs": 2, 00:33:59.212 "num_base_bdevs_discovered": 2, 00:33:59.212 "num_base_bdevs_operational": 2, 00:33:59.212 "base_bdevs_list": [ 00:33:59.212 { 00:33:59.212 "name": "spare", 00:33:59.212 "uuid": "4ee5e0b3-2536-5701-8d11-04c8049fb27d", 00:33:59.212 "is_configured": true, 00:33:59.212 
"data_offset": 256, 00:33:59.212 "data_size": 7936 00:33:59.212 }, 00:33:59.212 { 00:33:59.212 "name": "BaseBdev2", 00:33:59.212 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:33:59.212 "is_configured": true, 00:33:59.212 "data_offset": 256, 00:33:59.212 "data_size": 7936 00:33:59.212 } 00:33:59.212 ] 00:33:59.212 }' 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:59.212 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:59.470 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:33:59.470 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:59.470 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.470 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:59.470 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.470 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:59.470 "name": "raid_bdev1", 00:33:59.470 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:33:59.470 "strip_size_kb": 0, 00:33:59.470 "state": "online", 00:33:59.470 "raid_level": "raid1", 00:33:59.470 "superblock": true, 00:33:59.470 "num_base_bdevs": 2, 00:33:59.470 "num_base_bdevs_discovered": 2, 00:33:59.470 "num_base_bdevs_operational": 2, 00:33:59.470 "base_bdevs_list": [ 00:33:59.470 { 00:33:59.470 "name": "spare", 00:33:59.470 "uuid": "4ee5e0b3-2536-5701-8d11-04c8049fb27d", 00:33:59.470 "is_configured": true, 00:33:59.470 "data_offset": 256, 00:33:59.470 "data_size": 7936 00:33:59.470 }, 00:33:59.470 { 00:33:59.470 "name": "BaseBdev2", 00:33:59.470 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:33:59.470 "is_configured": true, 00:33:59.470 "data_offset": 256, 00:33:59.470 "data_size": 7936 00:33:59.470 } 00:33:59.470 ] 00:33:59.470 }' 00:33:59.470 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:59.470 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:59.728 
[2024-11-26 17:30:29.725847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:59.728 [2024-11-26 17:30:29.725886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:59.728 [2024-11-26 17:30:29.725990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:59.728 [2024-11-26 17:30:29.726071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:59.728 [2024-11-26 17:30:29.726088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:59.728 17:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:59.987 /dev/nbd0 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:59.987 1+0 records in 00:33:59.987 1+0 records out 00:33:59.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317047 s, 12.9 MB/s 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:59.987 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:34:00.247 /dev/nbd1 00:34:00.247 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:00.247 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:00.247 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:34:00.247 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:34:00.247 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:00.247 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:00.247 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:34:00.247 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:34:00.247 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:00.247 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:00.247 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:00.247 1+0 records in 00:34:00.247 1+0 records out 00:34:00.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598967 s, 6.8 MB/s 00:34:00.506 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:00.506 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:34:00.506 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:00.506 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:00.506 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:34:00.506 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:00.506 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:00.506 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:34:00.506 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:34:00.506 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:00.506 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:00.506 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:00.506 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:34:00.506 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:00.506 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:00.764 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:00.764 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:00.764 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:00.764 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:00.764 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:00.764 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:00.764 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:34:00.764 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:34:00.764 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:00.764 17:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:01.023 17:30:31 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:01.023 [2024-11-26 17:30:31.121798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:01.023 [2024-11-26 17:30:31.121878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:01.023 [2024-11-26 17:30:31.121915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:01.023 [2024-11-26 17:30:31.121928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:01.023 [2024-11-26 17:30:31.124875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:01.023 
[2024-11-26 17:30:31.124921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:01.023 [2024-11-26 17:30:31.125043] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:01.023 [2024-11-26 17:30:31.125103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:01.023 [2024-11-26 17:30:31.125279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:01.023 spare 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.023 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:01.309 [2024-11-26 17:30:31.225238] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:34:01.309 [2024-11-26 17:30:31.225317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:01.309 [2024-11-26 17:30:31.225822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:34:01.309 [2024-11-26 17:30:31.226081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:34:01.309 [2024-11-26 17:30:31.226095] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:34:01.309 [2024-11-26 17:30:31.226366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:01.309 17:30:31 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.309 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:01.309 "name": "raid_bdev1", 00:34:01.309 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:34:01.309 "strip_size_kb": 0, 00:34:01.309 "state": "online", 00:34:01.309 "raid_level": "raid1", 00:34:01.309 "superblock": true, 00:34:01.309 "num_base_bdevs": 2, 00:34:01.309 "num_base_bdevs_discovered": 2, 00:34:01.309 "num_base_bdevs_operational": 2, 
00:34:01.309 "base_bdevs_list": [ 00:34:01.310 { 00:34:01.310 "name": "spare", 00:34:01.310 "uuid": "4ee5e0b3-2536-5701-8d11-04c8049fb27d", 00:34:01.310 "is_configured": true, 00:34:01.310 "data_offset": 256, 00:34:01.310 "data_size": 7936 00:34:01.310 }, 00:34:01.310 { 00:34:01.310 "name": "BaseBdev2", 00:34:01.310 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:34:01.310 "is_configured": true, 00:34:01.310 "data_offset": 256, 00:34:01.310 "data_size": 7936 00:34:01.310 } 00:34:01.310 ] 00:34:01.310 }' 00:34:01.310 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:01.310 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:01.879 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:01.880 "name": "raid_bdev1", 00:34:01.880 
"uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:34:01.880 "strip_size_kb": 0, 00:34:01.880 "state": "online", 00:34:01.880 "raid_level": "raid1", 00:34:01.880 "superblock": true, 00:34:01.880 "num_base_bdevs": 2, 00:34:01.880 "num_base_bdevs_discovered": 2, 00:34:01.880 "num_base_bdevs_operational": 2, 00:34:01.880 "base_bdevs_list": [ 00:34:01.880 { 00:34:01.880 "name": "spare", 00:34:01.880 "uuid": "4ee5e0b3-2536-5701-8d11-04c8049fb27d", 00:34:01.880 "is_configured": true, 00:34:01.880 "data_offset": 256, 00:34:01.880 "data_size": 7936 00:34:01.880 }, 00:34:01.880 { 00:34:01.880 "name": "BaseBdev2", 00:34:01.880 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:34:01.880 "is_configured": true, 00:34:01.880 "data_offset": 256, 00:34:01.880 "data_size": 7936 00:34:01.880 } 00:34:01.880 ] 00:34:01.880 }' 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:01.880 [2024-11-26 17:30:31.913899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.880 
17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:01.880 "name": "raid_bdev1", 00:34:01.880 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:34:01.880 "strip_size_kb": 0, 00:34:01.880 "state": "online", 00:34:01.880 "raid_level": "raid1", 00:34:01.880 "superblock": true, 00:34:01.880 "num_base_bdevs": 2, 00:34:01.880 "num_base_bdevs_discovered": 1, 00:34:01.880 "num_base_bdevs_operational": 1, 00:34:01.880 "base_bdevs_list": [ 00:34:01.880 { 00:34:01.880 "name": null, 00:34:01.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:01.880 "is_configured": false, 00:34:01.880 "data_offset": 0, 00:34:01.880 "data_size": 7936 00:34:01.880 }, 00:34:01.880 { 00:34:01.880 "name": "BaseBdev2", 00:34:01.880 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:34:01.880 "is_configured": true, 00:34:01.880 "data_offset": 256, 00:34:01.880 "data_size": 7936 00:34:01.880 } 00:34:01.880 ] 00:34:01.880 }' 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:01.880 17:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:02.449 17:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:02.449 17:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.449 17:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:02.449 [2024-11-26 17:30:32.349911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:02.449 [2024-11-26 17:30:32.350320] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:34:02.449 [2024-11-26 17:30:32.350357] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:02.449 [2024-11-26 17:30:32.350407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:02.449 [2024-11-26 17:30:32.368883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:34:02.449 17:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.449 17:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:34:02.449 [2024-11-26 17:30:32.371587] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:03.385 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:03.385 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:03.385 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:03.385 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:03.385 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:03.385 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:03.385 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.385 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:03.385 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:03.385 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.385 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:03.385 
"name": "raid_bdev1", 00:34:03.385 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:34:03.385 "strip_size_kb": 0, 00:34:03.385 "state": "online", 00:34:03.385 "raid_level": "raid1", 00:34:03.385 "superblock": true, 00:34:03.385 "num_base_bdevs": 2, 00:34:03.385 "num_base_bdevs_discovered": 2, 00:34:03.385 "num_base_bdevs_operational": 2, 00:34:03.385 "process": { 00:34:03.385 "type": "rebuild", 00:34:03.385 "target": "spare", 00:34:03.385 "progress": { 00:34:03.385 "blocks": 2560, 00:34:03.385 "percent": 32 00:34:03.385 } 00:34:03.385 }, 00:34:03.385 "base_bdevs_list": [ 00:34:03.385 { 00:34:03.385 "name": "spare", 00:34:03.385 "uuid": "4ee5e0b3-2536-5701-8d11-04c8049fb27d", 00:34:03.385 "is_configured": true, 00:34:03.385 "data_offset": 256, 00:34:03.385 "data_size": 7936 00:34:03.385 }, 00:34:03.385 { 00:34:03.385 "name": "BaseBdev2", 00:34:03.385 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:34:03.385 "is_configured": true, 00:34:03.385 "data_offset": 256, 00:34:03.385 "data_size": 7936 00:34:03.385 } 00:34:03.385 ] 00:34:03.385 }' 00:34:03.385 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:03.385 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:03.385 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:03.645 [2024-11-26 17:30:33.527215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:03.645 [2024-11-26 
17:30:33.578713] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:03.645 [2024-11-26 17:30:33.578824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:03.645 [2024-11-26 17:30:33.578844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:03.645 [2024-11-26 17:30:33.578857] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.645 17:30:33 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:03.645 "name": "raid_bdev1", 00:34:03.645 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:34:03.645 "strip_size_kb": 0, 00:34:03.645 "state": "online", 00:34:03.645 "raid_level": "raid1", 00:34:03.645 "superblock": true, 00:34:03.645 "num_base_bdevs": 2, 00:34:03.645 "num_base_bdevs_discovered": 1, 00:34:03.645 "num_base_bdevs_operational": 1, 00:34:03.645 "base_bdevs_list": [ 00:34:03.645 { 00:34:03.645 "name": null, 00:34:03.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:03.645 "is_configured": false, 00:34:03.645 "data_offset": 0, 00:34:03.645 "data_size": 7936 00:34:03.645 }, 00:34:03.645 { 00:34:03.645 "name": "BaseBdev2", 00:34:03.645 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:34:03.645 "is_configured": true, 00:34:03.645 "data_offset": 256, 00:34:03.645 "data_size": 7936 00:34:03.645 } 00:34:03.645 ] 00:34:03.645 }' 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:03.645 17:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:04.215 17:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:04.215 17:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.215 17:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:04.215 [2024-11-26 17:30:34.050803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:04.215 [2024-11-26 17:30:34.051037] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:04.215 [2024-11-26 17:30:34.051076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:34:04.215 [2024-11-26 17:30:34.051094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:04.215 [2024-11-26 17:30:34.051721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:04.215 [2024-11-26 17:30:34.051747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:04.215 [2024-11-26 17:30:34.051880] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:04.215 [2024-11-26 17:30:34.051900] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:04.215 [2024-11-26 17:30:34.051913] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:34:04.215 [2024-11-26 17:30:34.051944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:04.215 [2024-11-26 17:30:34.070132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:34:04.215 spare 00:34:04.215 17:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.215 17:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:34:04.215 [2024-11-26 17:30:34.072617] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:05.154 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:05.154 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:05.155 "name": "raid_bdev1", 00:34:05.155 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:34:05.155 "strip_size_kb": 0, 00:34:05.155 
"state": "online", 00:34:05.155 "raid_level": "raid1", 00:34:05.155 "superblock": true, 00:34:05.155 "num_base_bdevs": 2, 00:34:05.155 "num_base_bdevs_discovered": 2, 00:34:05.155 "num_base_bdevs_operational": 2, 00:34:05.155 "process": { 00:34:05.155 "type": "rebuild", 00:34:05.155 "target": "spare", 00:34:05.155 "progress": { 00:34:05.155 "blocks": 2560, 00:34:05.155 "percent": 32 00:34:05.155 } 00:34:05.155 }, 00:34:05.155 "base_bdevs_list": [ 00:34:05.155 { 00:34:05.155 "name": "spare", 00:34:05.155 "uuid": "4ee5e0b3-2536-5701-8d11-04c8049fb27d", 00:34:05.155 "is_configured": true, 00:34:05.155 "data_offset": 256, 00:34:05.155 "data_size": 7936 00:34:05.155 }, 00:34:05.155 { 00:34:05.155 "name": "BaseBdev2", 00:34:05.155 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:34:05.155 "is_configured": true, 00:34:05.155 "data_offset": 256, 00:34:05.155 "data_size": 7936 00:34:05.155 } 00:34:05.155 ] 00:34:05.155 }' 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.155 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:05.155 [2024-11-26 17:30:35.212296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:05.414 [2024-11-26 17:30:35.279811] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:34:05.414 [2024-11-26 17:30:35.280144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:05.414 [2024-11-26 17:30:35.280255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:05.414 [2024-11-26 17:30:35.280340] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.414 17:30:35 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:05.414 "name": "raid_bdev1", 00:34:05.414 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:34:05.414 "strip_size_kb": 0, 00:34:05.414 "state": "online", 00:34:05.414 "raid_level": "raid1", 00:34:05.414 "superblock": true, 00:34:05.414 "num_base_bdevs": 2, 00:34:05.414 "num_base_bdevs_discovered": 1, 00:34:05.414 "num_base_bdevs_operational": 1, 00:34:05.414 "base_bdevs_list": [ 00:34:05.414 { 00:34:05.414 "name": null, 00:34:05.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.414 "is_configured": false, 00:34:05.414 "data_offset": 0, 00:34:05.414 "data_size": 7936 00:34:05.414 }, 00:34:05.414 { 00:34:05.414 "name": "BaseBdev2", 00:34:05.414 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:34:05.414 "is_configured": true, 00:34:05.414 "data_offset": 256, 00:34:05.414 "data_size": 7936 00:34:05.414 } 00:34:05.414 ] 00:34:05.414 }' 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:05.414 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:05.674 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:05.674 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:05.674 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:05.674 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:05.674 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:05.674 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:05.674 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.674 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:05.674 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:05.674 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.934 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:05.934 "name": "raid_bdev1", 00:34:05.934 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:34:05.934 "strip_size_kb": 0, 00:34:05.934 "state": "online", 00:34:05.934 "raid_level": "raid1", 00:34:05.934 "superblock": true, 00:34:05.934 "num_base_bdevs": 2, 00:34:05.934 "num_base_bdevs_discovered": 1, 00:34:05.934 "num_base_bdevs_operational": 1, 00:34:05.934 "base_bdevs_list": [ 00:34:05.934 { 00:34:05.934 "name": null, 00:34:05.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.934 "is_configured": false, 00:34:05.934 "data_offset": 0, 00:34:05.934 "data_size": 7936 00:34:05.934 }, 00:34:05.934 { 00:34:05.934 "name": "BaseBdev2", 00:34:05.934 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:34:05.934 "is_configured": true, 00:34:05.934 "data_offset": 256, 00:34:05.934 "data_size": 7936 00:34:05.934 } 00:34:05.934 ] 00:34:05.934 }' 00:34:05.934 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:05.934 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:05.934 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:05.934 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:05.934 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:34:05.934 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.934 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:05.934 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.934 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:05.934 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.934 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:05.934 [2024-11-26 17:30:35.896723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:05.934 [2024-11-26 17:30:35.896978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:05.934 [2024-11-26 17:30:35.897027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:34:05.934 [2024-11-26 17:30:35.897054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:05.934 [2024-11-26 17:30:35.897614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:05.934 [2024-11-26 17:30:35.897639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:05.934 [2024-11-26 17:30:35.897755] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:05.934 [2024-11-26 17:30:35.897773] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:05.934 [2024-11-26 17:30:35.897789] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:05.934 [2024-11-26 17:30:35.897803] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:34:05.934 BaseBdev1 00:34:05.934 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.934 17:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:06.911 "name": "raid_bdev1", 00:34:06.911 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:34:06.911 "strip_size_kb": 0, 00:34:06.911 "state": "online", 00:34:06.911 "raid_level": "raid1", 00:34:06.911 "superblock": true, 00:34:06.911 "num_base_bdevs": 2, 00:34:06.911 "num_base_bdevs_discovered": 1, 00:34:06.911 "num_base_bdevs_operational": 1, 00:34:06.911 "base_bdevs_list": [ 00:34:06.911 { 00:34:06.911 "name": null, 00:34:06.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:06.911 "is_configured": false, 00:34:06.911 "data_offset": 0, 00:34:06.911 "data_size": 7936 00:34:06.911 }, 00:34:06.911 { 00:34:06.911 "name": "BaseBdev2", 00:34:06.911 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:34:06.911 "is_configured": true, 00:34:06.911 "data_offset": 256, 00:34:06.911 "data_size": 7936 00:34:06.911 } 00:34:06.911 ] 00:34:06.911 }' 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:06.911 17:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:07.480 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:07.480 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:07.481 "name": "raid_bdev1", 00:34:07.481 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:34:07.481 "strip_size_kb": 0, 00:34:07.481 "state": "online", 00:34:07.481 "raid_level": "raid1", 00:34:07.481 "superblock": true, 00:34:07.481 "num_base_bdevs": 2, 00:34:07.481 "num_base_bdevs_discovered": 1, 00:34:07.481 "num_base_bdevs_operational": 1, 00:34:07.481 "base_bdevs_list": [ 00:34:07.481 { 00:34:07.481 "name": null, 00:34:07.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.481 "is_configured": false, 00:34:07.481 "data_offset": 0, 00:34:07.481 "data_size": 7936 00:34:07.481 }, 00:34:07.481 { 00:34:07.481 "name": "BaseBdev2", 00:34:07.481 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:34:07.481 "is_configured": true, 00:34:07.481 "data_offset": 256, 00:34:07.481 "data_size": 7936 00:34:07.481 } 00:34:07.481 ] 00:34:07.481 }' 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:07.481 [2024-11-26 17:30:37.526717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:07.481 [2024-11-26 17:30:37.526927] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:07.481 [2024-11-26 17:30:37.526957] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:07.481 request: 00:34:07.481 { 00:34:07.481 "base_bdev": "BaseBdev1", 00:34:07.481 "raid_bdev": "raid_bdev1", 00:34:07.481 "method": "bdev_raid_add_base_bdev", 00:34:07.481 "req_id": 1 00:34:07.481 } 00:34:07.481 Got JSON-RPC error response 00:34:07.481 response: 00:34:07.481 { 00:34:07.481 "code": -22, 00:34:07.481 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:07.481 } 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:07.481 17:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:08.862 "name": "raid_bdev1", 00:34:08.862 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:34:08.862 "strip_size_kb": 0, 00:34:08.862 "state": "online", 00:34:08.862 "raid_level": "raid1", 00:34:08.862 "superblock": true, 00:34:08.862 "num_base_bdevs": 2, 00:34:08.862 "num_base_bdevs_discovered": 1, 00:34:08.862 "num_base_bdevs_operational": 1, 00:34:08.862 "base_bdevs_list": [ 00:34:08.862 { 00:34:08.862 "name": null, 00:34:08.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:08.862 "is_configured": false, 00:34:08.862 "data_offset": 0, 00:34:08.862 "data_size": 7936 00:34:08.862 }, 00:34:08.862 { 00:34:08.862 "name": "BaseBdev2", 00:34:08.862 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:34:08.862 "is_configured": true, 00:34:08.862 "data_offset": 256, 00:34:08.862 "data_size": 7936 00:34:08.862 } 00:34:08.862 ] 00:34:08.862 }' 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:08.862 17:30:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.862 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:09.121 17:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.121 17:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:09.121 "name": "raid_bdev1", 00:34:09.121 "uuid": "a7f1b98c-999b-46f7-b097-8105d1d6100b", 00:34:09.121 "strip_size_kb": 0, 00:34:09.121 "state": "online", 00:34:09.121 "raid_level": "raid1", 00:34:09.121 "superblock": true, 00:34:09.121 "num_base_bdevs": 2, 00:34:09.121 "num_base_bdevs_discovered": 1, 00:34:09.121 "num_base_bdevs_operational": 1, 00:34:09.121 "base_bdevs_list": [ 00:34:09.121 { 00:34:09.121 "name": null, 00:34:09.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:09.121 "is_configured": false, 00:34:09.121 "data_offset": 0, 00:34:09.121 "data_size": 7936 00:34:09.121 }, 00:34:09.121 { 00:34:09.121 "name": "BaseBdev2", 00:34:09.121 "uuid": "2d5d82c1-3a8d-5665-9560-1bbe6d9fb6bc", 00:34:09.121 "is_configured": true, 00:34:09.121 "data_offset": 256, 00:34:09.121 "data_size": 7936 00:34:09.121 } 00:34:09.121 ] 00:34:09.121 }' 00:34:09.121 17:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:09.121 17:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:09.121 17:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:09.121 17:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:09.121 17:30:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86679 00:34:09.121 17:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86679 ']' 00:34:09.121 17:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86679 00:34:09.121 17:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:34:09.121 17:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:09.121 17:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86679 00:34:09.122 17:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:09.122 killing process with pid 86679 00:34:09.122 Received shutdown signal, test time was about 60.000000 seconds 00:34:09.122 00:34:09.122 Latency(us) 00:34:09.122 [2024-11-26T17:30:39.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.122 [2024-11-26T17:30:39.236Z] =================================================================================================================== 00:34:09.122 [2024-11-26T17:30:39.236Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:09.122 17:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:09.122 17:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86679' 00:34:09.122 17:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86679 00:34:09.122 [2024-11-26 17:30:39.153195] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:09.122 17:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86679 00:34:09.122 [2024-11-26 17:30:39.153357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:09.122 [2024-11-26 
17:30:39.153428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:09.122 [2024-11-26 17:30:39.153445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:34:09.381 [2024-11-26 17:30:39.469486] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:10.761 17:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:34:10.761 00:34:10.761 real 0m20.385s 00:34:10.761 user 0m26.280s 00:34:10.761 sys 0m3.232s 00:34:10.761 17:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:10.761 17:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:10.761 ************************************ 00:34:10.761 END TEST raid_rebuild_test_sb_4k 00:34:10.761 ************************************ 00:34:10.761 17:30:40 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:34:10.761 17:30:40 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:34:10.761 17:30:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:10.761 17:30:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.761 17:30:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:10.761 ************************************ 00:34:10.761 START TEST raid_state_function_test_sb_md_separate 00:34:10.761 ************************************ 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:34:10.761 
17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:34:10.761 17:30:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:34:10.761 Process raid pid: 87366 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87366 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87366' 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87366 00:34:10.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87366 ']' 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.761 17:30:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:10.761 [2024-11-26 17:30:40.844542] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:34:10.761 [2024-11-26 17:30:40.844876] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.020 [2024-11-26 17:30:41.027578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.280 [2024-11-26 17:30:41.172682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.539 [2024-11-26 17:30:41.415633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:11.539 [2024-11-26 17:30:41.415929] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:11.797 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.797 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:34:11.797 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:11.797 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.797 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:11.797 [2024-11-26 17:30:41.712579] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:11.798 [2024-11-26 17:30:41.712831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:34:11.798 [2024-11-26 17:30:41.712859] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:11.798 [2024-11-26 17:30:41.712874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:11.798 "name": "Existed_Raid", 00:34:11.798 "uuid": "60b5f037-c471-487f-909f-ec4b0f6b54d9", 00:34:11.798 "strip_size_kb": 0, 00:34:11.798 "state": "configuring", 00:34:11.798 "raid_level": "raid1", 00:34:11.798 "superblock": true, 00:34:11.798 "num_base_bdevs": 2, 00:34:11.798 "num_base_bdevs_discovered": 0, 00:34:11.798 "num_base_bdevs_operational": 2, 00:34:11.798 "base_bdevs_list": [ 00:34:11.798 { 00:34:11.798 "name": "BaseBdev1", 00:34:11.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:11.798 "is_configured": false, 00:34:11.798 "data_offset": 0, 00:34:11.798 "data_size": 0 00:34:11.798 }, 00:34:11.798 { 00:34:11.798 "name": "BaseBdev2", 00:34:11.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:11.798 "is_configured": false, 00:34:11.798 "data_offset": 0, 00:34:11.798 "data_size": 0 00:34:11.798 } 00:34:11.798 ] 00:34:11.798 }' 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:11.798 17:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.057 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:12.057 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.057 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.057 
[2024-11-26 17:30:42.167892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:12.057 [2024-11-26 17:30:42.167946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.315 [2024-11-26 17:30:42.179854] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:12.315 [2024-11-26 17:30:42.179930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:12.315 [2024-11-26 17:30:42.179942] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:12.315 [2024-11-26 17:30:42.179959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.315 [2024-11-26 17:30:42.236577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:12.315 
BaseBdev1 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.315 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.315 [ 00:34:12.315 { 00:34:12.315 "name": "BaseBdev1", 00:34:12.315 "aliases": [ 00:34:12.315 "76e9d1e6-481b-4028-9a4b-2fe25685e43f" 00:34:12.315 ], 00:34:12.315 "product_name": "Malloc disk", 
00:34:12.315 "block_size": 4096, 00:34:12.315 "num_blocks": 8192, 00:34:12.315 "uuid": "76e9d1e6-481b-4028-9a4b-2fe25685e43f", 00:34:12.315 "md_size": 32, 00:34:12.315 "md_interleave": false, 00:34:12.315 "dif_type": 0, 00:34:12.315 "assigned_rate_limits": { 00:34:12.315 "rw_ios_per_sec": 0, 00:34:12.315 "rw_mbytes_per_sec": 0, 00:34:12.315 "r_mbytes_per_sec": 0, 00:34:12.315 "w_mbytes_per_sec": 0 00:34:12.315 }, 00:34:12.315 "claimed": true, 00:34:12.315 "claim_type": "exclusive_write", 00:34:12.315 "zoned": false, 00:34:12.316 "supported_io_types": { 00:34:12.316 "read": true, 00:34:12.316 "write": true, 00:34:12.316 "unmap": true, 00:34:12.316 "flush": true, 00:34:12.316 "reset": true, 00:34:12.316 "nvme_admin": false, 00:34:12.316 "nvme_io": false, 00:34:12.316 "nvme_io_md": false, 00:34:12.316 "write_zeroes": true, 00:34:12.316 "zcopy": true, 00:34:12.316 "get_zone_info": false, 00:34:12.316 "zone_management": false, 00:34:12.316 "zone_append": false, 00:34:12.316 "compare": false, 00:34:12.316 "compare_and_write": false, 00:34:12.316 "abort": true, 00:34:12.316 "seek_hole": false, 00:34:12.316 "seek_data": false, 00:34:12.316 "copy": true, 00:34:12.316 "nvme_iov_md": false 00:34:12.316 }, 00:34:12.316 "memory_domains": [ 00:34:12.316 { 00:34:12.316 "dma_device_id": "system", 00:34:12.316 "dma_device_type": 1 00:34:12.316 }, 00:34:12.316 { 00:34:12.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:12.316 "dma_device_type": 2 00:34:12.316 } 00:34:12.316 ], 00:34:12.316 "driver_specific": {} 00:34:12.316 } 00:34:12.316 ] 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:12.316 17:30:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:12.316 "name": "Existed_Raid", 00:34:12.316 "uuid": "dda04886-8d27-4844-8147-fc271efb81b7", 
00:34:12.316 "strip_size_kb": 0, 00:34:12.316 "state": "configuring", 00:34:12.316 "raid_level": "raid1", 00:34:12.316 "superblock": true, 00:34:12.316 "num_base_bdevs": 2, 00:34:12.316 "num_base_bdevs_discovered": 1, 00:34:12.316 "num_base_bdevs_operational": 2, 00:34:12.316 "base_bdevs_list": [ 00:34:12.316 { 00:34:12.316 "name": "BaseBdev1", 00:34:12.316 "uuid": "76e9d1e6-481b-4028-9a4b-2fe25685e43f", 00:34:12.316 "is_configured": true, 00:34:12.316 "data_offset": 256, 00:34:12.316 "data_size": 7936 00:34:12.316 }, 00:34:12.316 { 00:34:12.316 "name": "BaseBdev2", 00:34:12.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:12.316 "is_configured": false, 00:34:12.316 "data_offset": 0, 00:34:12.316 "data_size": 0 00:34:12.316 } 00:34:12.316 ] 00:34:12.316 }' 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:12.316 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.574 [2024-11-26 17:30:42.612151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:12.574 [2024-11-26 17:30:42.612227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:12.574 17:30:42 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.574 [2024-11-26 17:30:42.624204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:12.574 [2024-11-26 17:30:42.626827] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:12.574 [2024-11-26 17:30:42.627021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:12.574 "name": "Existed_Raid", 00:34:12.574 "uuid": "dd6b4318-bef8-48d9-9eaf-6d67ec02b37b", 00:34:12.574 "strip_size_kb": 0, 00:34:12.574 "state": "configuring", 00:34:12.574 "raid_level": "raid1", 00:34:12.574 "superblock": true, 00:34:12.574 "num_base_bdevs": 2, 00:34:12.574 "num_base_bdevs_discovered": 1, 00:34:12.574 "num_base_bdevs_operational": 2, 00:34:12.574 "base_bdevs_list": [ 00:34:12.574 { 00:34:12.574 "name": "BaseBdev1", 00:34:12.574 "uuid": "76e9d1e6-481b-4028-9a4b-2fe25685e43f", 00:34:12.574 "is_configured": true, 00:34:12.574 "data_offset": 256, 00:34:12.574 "data_size": 7936 00:34:12.574 }, 00:34:12.574 { 00:34:12.574 "name": "BaseBdev2", 00:34:12.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:12.574 "is_configured": false, 00:34:12.574 "data_offset": 0, 00:34:12.574 "data_size": 0 00:34:12.574 } 00:34:12.574 ] 00:34:12.574 }' 00:34:12.574 17:30:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:12.574 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.832 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:34:12.832 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.832 17:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.089 [2024-11-26 17:30:43.004051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:13.089 [2024-11-26 17:30:43.004380] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:13.089 [2024-11-26 17:30:43.004403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:13.089 [2024-11-26 17:30:43.004506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:34:13.089 [2024-11-26 17:30:43.004682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:13.089 [2024-11-26 17:30:43.004698] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:34:13.089 [2024-11-26 17:30:43.004794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:13.089 BaseBdev2 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.089 [ 00:34:13.089 { 00:34:13.089 "name": "BaseBdev2", 00:34:13.089 "aliases": [ 00:34:13.089 "15f15d66-05e5-4762-a1e6-9bf48a0325d7" 00:34:13.089 ], 00:34:13.089 "product_name": "Malloc disk", 00:34:13.089 "block_size": 4096, 00:34:13.089 "num_blocks": 8192, 00:34:13.089 "uuid": "15f15d66-05e5-4762-a1e6-9bf48a0325d7", 00:34:13.089 "md_size": 32, 00:34:13.089 "md_interleave": false, 00:34:13.089 "dif_type": 0, 00:34:13.089 "assigned_rate_limits": { 00:34:13.089 "rw_ios_per_sec": 0, 00:34:13.089 "rw_mbytes_per_sec": 0, 00:34:13.089 "r_mbytes_per_sec": 0, 00:34:13.089 "w_mbytes_per_sec": 0 00:34:13.089 }, 00:34:13.089 "claimed": true, 00:34:13.089 "claim_type": 
"exclusive_write", 00:34:13.089 "zoned": false, 00:34:13.089 "supported_io_types": { 00:34:13.089 "read": true, 00:34:13.089 "write": true, 00:34:13.089 "unmap": true, 00:34:13.089 "flush": true, 00:34:13.089 "reset": true, 00:34:13.089 "nvme_admin": false, 00:34:13.089 "nvme_io": false, 00:34:13.089 "nvme_io_md": false, 00:34:13.089 "write_zeroes": true, 00:34:13.089 "zcopy": true, 00:34:13.089 "get_zone_info": false, 00:34:13.089 "zone_management": false, 00:34:13.089 "zone_append": false, 00:34:13.089 "compare": false, 00:34:13.089 "compare_and_write": false, 00:34:13.089 "abort": true, 00:34:13.089 "seek_hole": false, 00:34:13.089 "seek_data": false, 00:34:13.089 "copy": true, 00:34:13.089 "nvme_iov_md": false 00:34:13.089 }, 00:34:13.089 "memory_domains": [ 00:34:13.089 { 00:34:13.089 "dma_device_id": "system", 00:34:13.089 "dma_device_type": 1 00:34:13.089 }, 00:34:13.089 { 00:34:13.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:13.089 "dma_device_type": 2 00:34:13.089 } 00:34:13.089 ], 00:34:13.089 "driver_specific": {} 00:34:13.089 } 00:34:13.089 ] 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:13.089 
17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:13.089 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:13.090 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:13.090 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:13.090 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:13.090 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:13.090 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.090 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.090 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.090 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:13.090 "name": "Existed_Raid", 00:34:13.090 "uuid": "dd6b4318-bef8-48d9-9eaf-6d67ec02b37b", 00:34:13.090 "strip_size_kb": 0, 00:34:13.090 "state": "online", 00:34:13.090 "raid_level": "raid1", 00:34:13.090 "superblock": true, 00:34:13.090 "num_base_bdevs": 2, 00:34:13.090 "num_base_bdevs_discovered": 2, 00:34:13.090 "num_base_bdevs_operational": 2, 00:34:13.090 
"base_bdevs_list": [ 00:34:13.090 { 00:34:13.090 "name": "BaseBdev1", 00:34:13.090 "uuid": "76e9d1e6-481b-4028-9a4b-2fe25685e43f", 00:34:13.090 "is_configured": true, 00:34:13.090 "data_offset": 256, 00:34:13.090 "data_size": 7936 00:34:13.090 }, 00:34:13.090 { 00:34:13.090 "name": "BaseBdev2", 00:34:13.090 "uuid": "15f15d66-05e5-4762-a1e6-9bf48a0325d7", 00:34:13.090 "is_configured": true, 00:34:13.090 "data_offset": 256, 00:34:13.090 "data_size": 7936 00:34:13.090 } 00:34:13.090 ] 00:34:13.090 }' 00:34:13.090 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:13.090 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:34:13.347 [2024-11-26 17:30:43.352017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:13.347 "name": "Existed_Raid", 00:34:13.347 "aliases": [ 00:34:13.347 "dd6b4318-bef8-48d9-9eaf-6d67ec02b37b" 00:34:13.347 ], 00:34:13.347 "product_name": "Raid Volume", 00:34:13.347 "block_size": 4096, 00:34:13.347 "num_blocks": 7936, 00:34:13.347 "uuid": "dd6b4318-bef8-48d9-9eaf-6d67ec02b37b", 00:34:13.347 "md_size": 32, 00:34:13.347 "md_interleave": false, 00:34:13.347 "dif_type": 0, 00:34:13.347 "assigned_rate_limits": { 00:34:13.347 "rw_ios_per_sec": 0, 00:34:13.347 "rw_mbytes_per_sec": 0, 00:34:13.347 "r_mbytes_per_sec": 0, 00:34:13.347 "w_mbytes_per_sec": 0 00:34:13.347 }, 00:34:13.347 "claimed": false, 00:34:13.347 "zoned": false, 00:34:13.347 "supported_io_types": { 00:34:13.347 "read": true, 00:34:13.347 "write": true, 00:34:13.347 "unmap": false, 00:34:13.347 "flush": false, 00:34:13.347 "reset": true, 00:34:13.347 "nvme_admin": false, 00:34:13.347 "nvme_io": false, 00:34:13.347 "nvme_io_md": false, 00:34:13.347 "write_zeroes": true, 00:34:13.347 "zcopy": false, 00:34:13.347 "get_zone_info": false, 00:34:13.347 "zone_management": false, 00:34:13.347 "zone_append": false, 00:34:13.347 "compare": false, 00:34:13.347 "compare_and_write": false, 00:34:13.347 "abort": false, 00:34:13.347 "seek_hole": false, 00:34:13.347 "seek_data": false, 00:34:13.347 "copy": false, 00:34:13.347 "nvme_iov_md": false 00:34:13.347 }, 00:34:13.347 "memory_domains": [ 00:34:13.347 { 00:34:13.347 "dma_device_id": "system", 00:34:13.347 "dma_device_type": 1 00:34:13.347 }, 00:34:13.347 { 00:34:13.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:13.347 "dma_device_type": 2 00:34:13.347 }, 00:34:13.347 { 
00:34:13.347 "dma_device_id": "system", 00:34:13.347 "dma_device_type": 1 00:34:13.347 }, 00:34:13.347 { 00:34:13.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:13.347 "dma_device_type": 2 00:34:13.347 } 00:34:13.347 ], 00:34:13.347 "driver_specific": { 00:34:13.347 "raid": { 00:34:13.347 "uuid": "dd6b4318-bef8-48d9-9eaf-6d67ec02b37b", 00:34:13.347 "strip_size_kb": 0, 00:34:13.347 "state": "online", 00:34:13.347 "raid_level": "raid1", 00:34:13.347 "superblock": true, 00:34:13.347 "num_base_bdevs": 2, 00:34:13.347 "num_base_bdevs_discovered": 2, 00:34:13.347 "num_base_bdevs_operational": 2, 00:34:13.347 "base_bdevs_list": [ 00:34:13.347 { 00:34:13.347 "name": "BaseBdev1", 00:34:13.347 "uuid": "76e9d1e6-481b-4028-9a4b-2fe25685e43f", 00:34:13.347 "is_configured": true, 00:34:13.347 "data_offset": 256, 00:34:13.347 "data_size": 7936 00:34:13.347 }, 00:34:13.347 { 00:34:13.347 "name": "BaseBdev2", 00:34:13.347 "uuid": "15f15d66-05e5-4762-a1e6-9bf48a0325d7", 00:34:13.347 "is_configured": true, 00:34:13.347 "data_offset": 256, 00:34:13.347 "data_size": 7936 00:34:13.347 } 00:34:13.347 ] 00:34:13.347 } 00:34:13.347 } 00:34:13.347 }' 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:34:13.347 BaseBdev2' 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.347 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.605 [2024-11-26 17:30:43.511730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:13.605 "name": "Existed_Raid", 00:34:13.605 "uuid": "dd6b4318-bef8-48d9-9eaf-6d67ec02b37b", 00:34:13.605 "strip_size_kb": 0, 00:34:13.605 "state": "online", 00:34:13.605 "raid_level": "raid1", 00:34:13.605 "superblock": true, 00:34:13.605 "num_base_bdevs": 2, 00:34:13.605 "num_base_bdevs_discovered": 1, 00:34:13.605 "num_base_bdevs_operational": 1, 00:34:13.605 "base_bdevs_list": [ 00:34:13.605 { 00:34:13.605 "name": null, 00:34:13.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:13.605 "is_configured": false, 00:34:13.605 "data_offset": 0, 00:34:13.605 "data_size": 7936 00:34:13.605 }, 00:34:13.605 { 00:34:13.605 "name": "BaseBdev2", 00:34:13.605 "uuid": 
"15f15d66-05e5-4762-a1e6-9bf48a0325d7", 00:34:13.605 "is_configured": true, 00:34:13.605 "data_offset": 256, 00:34:13.605 "data_size": 7936 00:34:13.605 } 00:34:13.605 ] 00:34:13.605 }' 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:13.605 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.862 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:34:13.862 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:13.862 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:13.862 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:13.862 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.862 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.862 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.862 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:13.862 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:14.119 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:34:14.119 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.119 17:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:14.119 [2024-11-26 17:30:43.979940] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:14.119 [2024-11-26 17:30:43.980077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:14.119 [2024-11-26 17:30:44.096578] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:14.119 [2024-11-26 17:30:44.096643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:14.119 [2024-11-26 17:30:44.096660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:34:14.119 17:30:44 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87366 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87366 ']' 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87366 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87366 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:14.119 killing process with pid 87366 00:34:14.119 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87366' 00:34:14.120 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87366 00:34:14.120 [2024-11-26 17:30:44.158718] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:14.120 17:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87366 00:34:14.120 [2024-11-26 17:30:44.176593] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:15.499 17:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:34:15.499 00:34:15.499 real 0m4.711s 00:34:15.499 user 0m6.356s 00:34:15.499 sys 0m0.805s 00:34:15.499 17:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:15.499 
17:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:15.499 ************************************ 00:34:15.499 END TEST raid_state_function_test_sb_md_separate 00:34:15.499 ************************************ 00:34:15.499 17:30:45 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:34:15.499 17:30:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:15.499 17:30:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:15.499 17:30:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:15.499 ************************************ 00:34:15.499 START TEST raid_superblock_test_md_separate 00:34:15.499 ************************************ 00:34:15.499 17:30:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:34:15.499 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:34:15.499 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:34:15.499 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87612 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87612 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87612 ']' 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:15.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:15.500 17:30:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:15.500 [2024-11-26 17:30:45.590291] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:34:15.500 [2024-11-26 17:30:45.590448] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87612 ] 00:34:15.757 [2024-11-26 17:30:45.764698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.014 [2024-11-26 17:30:45.916187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.271 [2024-11-26 17:30:46.159417] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:16.271 [2024-11-26 17:30:46.159479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:16.528 17:30:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:16.528 malloc1 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:16.528 [2024-11-26 17:30:46.501299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:16.528 [2024-11-26 17:30:46.501376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:16.528 [2024-11-26 17:30:46.501405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:16.528 [2024-11-26 17:30:46.501420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:16.528 [2024-11-26 17:30:46.504030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:16.528 [2024-11-26 17:30:46.504074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:34:16.528 pt1 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:16.528 malloc2 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.528 17:30:46 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:16.528 [2024-11-26 17:30:46.560309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:16.528 [2024-11-26 17:30:46.560387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:16.528 [2024-11-26 17:30:46.560416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:16.528 [2024-11-26 17:30:46.560428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:16.528 [2024-11-26 17:30:46.563034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:16.528 [2024-11-26 17:30:46.563076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:16.528 pt2 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:16.528 [2024-11-26 17:30:46.568328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:16.528 [2024-11-26 17:30:46.570830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:16.528 [2024-11-26 17:30:46.571047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:16.528 [2024-11-26 17:30:46.571065] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:16.528 [2024-11-26 17:30:46.571162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:34:16.528 [2024-11-26 17:30:46.571291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:16.528 [2024-11-26 17:30:46.571308] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:34:16.528 [2024-11-26 17:30:46.571430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:16.528 17:30:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.528 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:16.528 "name": "raid_bdev1", 00:34:16.528 "uuid": "4734e4c7-0813-47be-8d92-c461c2fb482f", 00:34:16.528 "strip_size_kb": 0, 00:34:16.528 "state": "online", 00:34:16.528 "raid_level": "raid1", 00:34:16.528 "superblock": true, 00:34:16.528 "num_base_bdevs": 2, 00:34:16.528 "num_base_bdevs_discovered": 2, 00:34:16.528 "num_base_bdevs_operational": 2, 00:34:16.528 "base_bdevs_list": [ 00:34:16.528 { 00:34:16.528 "name": "pt1", 00:34:16.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:16.529 "is_configured": true, 00:34:16.529 "data_offset": 256, 00:34:16.529 "data_size": 7936 00:34:16.529 }, 00:34:16.529 { 00:34:16.529 "name": "pt2", 00:34:16.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:16.529 "is_configured": true, 00:34:16.529 "data_offset": 256, 00:34:16.529 "data_size": 7936 00:34:16.529 } 00:34:16.529 ] 00:34:16.529 }' 00:34:16.529 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:16.529 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.094 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:34:17.094 17:30:46 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:17.094 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:17.094 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:17.094 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:34:17.094 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:17.094 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:17.094 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:17.094 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.094 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.094 [2024-11-26 17:30:46.936142] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:17.094 17:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.094 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:17.094 "name": "raid_bdev1", 00:34:17.094 "aliases": [ 00:34:17.094 "4734e4c7-0813-47be-8d92-c461c2fb482f" 00:34:17.094 ], 00:34:17.094 "product_name": "Raid Volume", 00:34:17.094 "block_size": 4096, 00:34:17.094 "num_blocks": 7936, 00:34:17.094 "uuid": "4734e4c7-0813-47be-8d92-c461c2fb482f", 00:34:17.094 "md_size": 32, 00:34:17.094 "md_interleave": false, 00:34:17.094 "dif_type": 0, 00:34:17.094 "assigned_rate_limits": { 00:34:17.094 "rw_ios_per_sec": 0, 00:34:17.094 "rw_mbytes_per_sec": 0, 00:34:17.094 "r_mbytes_per_sec": 0, 00:34:17.094 "w_mbytes_per_sec": 0 00:34:17.094 }, 00:34:17.094 "claimed": false, 00:34:17.094 "zoned": false, 
00:34:17.094 "supported_io_types": { 00:34:17.094 "read": true, 00:34:17.094 "write": true, 00:34:17.094 "unmap": false, 00:34:17.094 "flush": false, 00:34:17.094 "reset": true, 00:34:17.094 "nvme_admin": false, 00:34:17.094 "nvme_io": false, 00:34:17.094 "nvme_io_md": false, 00:34:17.094 "write_zeroes": true, 00:34:17.094 "zcopy": false, 00:34:17.094 "get_zone_info": false, 00:34:17.094 "zone_management": false, 00:34:17.094 "zone_append": false, 00:34:17.094 "compare": false, 00:34:17.094 "compare_and_write": false, 00:34:17.094 "abort": false, 00:34:17.094 "seek_hole": false, 00:34:17.094 "seek_data": false, 00:34:17.094 "copy": false, 00:34:17.094 "nvme_iov_md": false 00:34:17.094 }, 00:34:17.094 "memory_domains": [ 00:34:17.094 { 00:34:17.094 "dma_device_id": "system", 00:34:17.094 "dma_device_type": 1 00:34:17.094 }, 00:34:17.094 { 00:34:17.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:17.094 "dma_device_type": 2 00:34:17.094 }, 00:34:17.094 { 00:34:17.094 "dma_device_id": "system", 00:34:17.094 "dma_device_type": 1 00:34:17.094 }, 00:34:17.094 { 00:34:17.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:17.094 "dma_device_type": 2 00:34:17.094 } 00:34:17.094 ], 00:34:17.094 "driver_specific": { 00:34:17.094 "raid": { 00:34:17.094 "uuid": "4734e4c7-0813-47be-8d92-c461c2fb482f", 00:34:17.094 "strip_size_kb": 0, 00:34:17.094 "state": "online", 00:34:17.094 "raid_level": "raid1", 00:34:17.094 "superblock": true, 00:34:17.094 "num_base_bdevs": 2, 00:34:17.094 "num_base_bdevs_discovered": 2, 00:34:17.094 "num_base_bdevs_operational": 2, 00:34:17.094 "base_bdevs_list": [ 00:34:17.094 { 00:34:17.094 "name": "pt1", 00:34:17.094 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:17.094 "is_configured": true, 00:34:17.094 "data_offset": 256, 00:34:17.094 "data_size": 7936 00:34:17.094 }, 00:34:17.094 { 00:34:17.094 "name": "pt2", 00:34:17.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:17.094 "is_configured": true, 00:34:17.094 "data_offset": 256, 
00:34:17.094 "data_size": 7936 00:34:17.094 } 00:34:17.094 ] 00:34:17.094 } 00:34:17.094 } 00:34:17.094 }' 00:34:17.094 17:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:17.094 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:17.094 pt2' 00:34:17.094 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:17.094 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:34:17.094 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:17.094 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:17.094 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:34:17.095 [2024-11-26 17:30:47.112016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4734e4c7-0813-47be-8d92-c461c2fb482f 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 4734e4c7-0813-47be-8d92-c461c2fb482f ']' 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:17.095 [2024-11-26 17:30:47.147687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:34:17.095 [2024-11-26 17:30:47.147727] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:34:17.095 [2024-11-26 17:30:47.147850] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:34:17.095 [2024-11-26 17:30:47.147921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:34:17.095 [2024-11-26 17:30:47.147938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:17.095 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.353 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:34:17.353 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:34:17.353 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:17.354 [2024-11-26 17:30:47.251597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:34:17.354 [2024-11-26 17:30:47.253959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:34:17.354 [2024-11-26 17:30:47.254056] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:34:17.354 [2024-11-26 17:30:47.254124] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:34:17.354 [2024-11-26 17:30:47.254143] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:34:17.354 [2024-11-26 17:30:47.254156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:34:17.354 request:
00:34:17.354 {
00:34:17.354 "name": "raid_bdev1",
00:34:17.354 "raid_level": "raid1",
00:34:17.354 "base_bdevs": [
00:34:17.354 "malloc1",
00:34:17.354 "malloc2"
00:34:17.354 ],
00:34:17.354 "superblock": false,
00:34:17.354 "method": "bdev_raid_create",
00:34:17.354 "req_id": 1
00:34:17.354 }
00:34:17.354 Got JSON-RPC error response
00:34:17.354 response:
00:34:17.354 {
00:34:17.354 "code": -17,
00:34:17.354 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:34:17.354 }
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:17.354 [2024-11-26 17:30:47.315424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:34:17.354 [2024-11-26 17:30:47.315494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:34:17.354 [2024-11-26 17:30:47.315528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:34:17.354 [2024-11-26 17:30:47.315546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:34:17.354 [2024-11-26 17:30:47.317985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:34:17.354 [2024-11-26 17:30:47.318029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:34:17.354 [2024-11-26 17:30:47.318108] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:34:17.354 [2024-11-26 17:30:47.318181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:34:17.354 pt1
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:34:17.354 "name": "raid_bdev1",
00:34:17.354 "uuid": "4734e4c7-0813-47be-8d92-c461c2fb482f",
00:34:17.354 "strip_size_kb": 0,
00:34:17.354 "state": "configuring",
00:34:17.354 "raid_level": "raid1",
00:34:17.354 "superblock": true,
00:34:17.354 "num_base_bdevs": 2,
00:34:17.354 "num_base_bdevs_discovered": 1,
00:34:17.354 "num_base_bdevs_operational": 2,
00:34:17.354 "base_bdevs_list": [
00:34:17.354 {
00:34:17.354 "name": "pt1",
00:34:17.354 "uuid": "00000000-0000-0000-0000-000000000001",
00:34:17.354 "is_configured": true,
00:34:17.354 "data_offset": 256,
00:34:17.354 "data_size": 7936
00:34:17.354 },
00:34:17.354 {
00:34:17.354 "name": null,
00:34:17.354 "uuid": "00000000-0000-0000-0000-000000000002",
00:34:17.354 "is_configured": false,
00:34:17.354 "data_offset": 256,
00:34:17.354 "data_size": 7936
00:34:17.354 }
00:34:17.354 ]
00:34:17.354 }'
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:34:17.354 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:17.613 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:34:17.613 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:34:17.614 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:34:17.614 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:34:17.614 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.614 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:17.614 [2024-11-26 17:30:47.718862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:34:17.614 [2024-11-26 17:30:47.718963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:34:17.614 [2024-11-26 17:30:47.718989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:34:17.614 [2024-11-26 17:30:47.719005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:34:17.614 [2024-11-26 17:30:47.719264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:34:17.614 [2024-11-26 17:30:47.719287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:34:17.614 [2024-11-26 17:30:47.719347] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:34:17.614 [2024-11-26 17:30:47.719374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:34:17.614 [2024-11-26 17:30:47.719499] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:34:17.614 [2024-11-26 17:30:47.719513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:34:17.614 [2024-11-26 17:30:47.719613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:34:17.614 [2024-11-26 17:30:47.719725] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:34:17.614 [2024-11-26 17:30:47.719734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:34:17.614 [2024-11-26 17:30:47.719843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:34:17.614 pt2
00:34:17.614 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.614 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:34:17.614 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:34:17.614 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:34:17.614 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:34:17.614 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:34:17.614 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:34:17.614 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:34:17.614 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:34:17.873 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:34:17.873 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:34:17.873 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:34:17.873 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:34:17.873 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:17.873 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.873 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:17.873 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:17.873 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.873 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:34:17.873 "name": "raid_bdev1",
00:34:17.873 "uuid": "4734e4c7-0813-47be-8d92-c461c2fb482f",
00:34:17.873 "strip_size_kb": 0,
00:34:17.873 "state": "online",
00:34:17.873 "raid_level": "raid1",
00:34:17.873 "superblock": true,
00:34:17.873 "num_base_bdevs": 2,
00:34:17.873 "num_base_bdevs_discovered": 2,
00:34:17.873 "num_base_bdevs_operational": 2,
00:34:17.873 "base_bdevs_list": [
00:34:17.873 {
00:34:17.873 "name": "pt1",
00:34:17.873 "uuid": "00000000-0000-0000-0000-000000000001",
00:34:17.873 "is_configured": true,
00:34:17.873 "data_offset": 256,
00:34:17.873 "data_size": 7936
00:34:17.873 },
00:34:17.873 {
00:34:17.873 "name": "pt2",
00:34:17.873 "uuid": "00000000-0000-0000-0000-000000000002",
00:34:17.873 "is_configured": true,
00:34:17.873 "data_offset": 256,
00:34:17.873 "data_size": 7936
00:34:17.873 }
00:34:17.873 ]
00:34:17.873 }'
00:34:17.873 17:30:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:34:17.873 17:30:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:18.132 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:34:18.132 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:34:18.132 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:34:18.132 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:34:18.132 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:34:18.132 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:34:18.132 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:34:18.132 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.132 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:34:18.132 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:18.132 [2024-11-26 17:30:48.166573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:34:18.132 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.132 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:34:18.132 "name": "raid_bdev1",
00:34:18.132 "aliases": [
00:34:18.132 "4734e4c7-0813-47be-8d92-c461c2fb482f"
00:34:18.132 ],
00:34:18.132 "product_name": "Raid Volume",
00:34:18.132 "block_size": 4096,
00:34:18.132 "num_blocks": 7936,
00:34:18.132 "uuid": "4734e4c7-0813-47be-8d92-c461c2fb482f",
00:34:18.132 "md_size": 32,
00:34:18.132 "md_interleave": false,
00:34:18.132 "dif_type": 0,
00:34:18.132 "assigned_rate_limits": {
00:34:18.132 "rw_ios_per_sec": 0,
00:34:18.132 "rw_mbytes_per_sec": 0,
00:34:18.132 "r_mbytes_per_sec": 0,
00:34:18.132 "w_mbytes_per_sec": 0
00:34:18.132 },
00:34:18.132 "claimed": false,
00:34:18.132 "zoned": false,
00:34:18.132 "supported_io_types": {
00:34:18.132 "read": true,
00:34:18.132 "write": true,
00:34:18.132 "unmap": false,
00:34:18.132 "flush": false,
00:34:18.132 "reset": true,
00:34:18.132 "nvme_admin": false,
00:34:18.132 "nvme_io": false,
00:34:18.132 "nvme_io_md": false,
00:34:18.132 "write_zeroes": true,
00:34:18.132 "zcopy": false,
00:34:18.132 "get_zone_info": false,
00:34:18.132 "zone_management": false,
00:34:18.132 "zone_append": false,
00:34:18.132 "compare": false,
00:34:18.132 "compare_and_write": false,
00:34:18.132 "abort": false,
00:34:18.132 "seek_hole": false,
00:34:18.132 "seek_data": false,
00:34:18.132 "copy": false,
00:34:18.132 "nvme_iov_md": false
00:34:18.132 },
00:34:18.132 "memory_domains": [
00:34:18.132 {
00:34:18.132 "dma_device_id": "system",
00:34:18.132 "dma_device_type": 1
00:34:18.132 },
00:34:18.132 {
00:34:18.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:34:18.132 "dma_device_type": 2
00:34:18.132 },
00:34:18.132 {
00:34:18.132 "dma_device_id": "system",
00:34:18.132 "dma_device_type": 1
00:34:18.132 },
00:34:18.132 {
00:34:18.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:34:18.132 "dma_device_type": 2
00:34:18.132 }
00:34:18.132 ],
00:34:18.132 "driver_specific": {
00:34:18.132 "raid": {
00:34:18.132 "uuid": "4734e4c7-0813-47be-8d92-c461c2fb482f",
00:34:18.132 "strip_size_kb": 0,
00:34:18.132 "state": "online",
00:34:18.132 "raid_level": "raid1",
00:34:18.132 "superblock": true,
00:34:18.132 "num_base_bdevs": 2,
00:34:18.132 "num_base_bdevs_discovered": 2,
00:34:18.132 "num_base_bdevs_operational": 2,
00:34:18.132 "base_bdevs_list": [
00:34:18.132 {
00:34:18.132 "name": "pt1",
00:34:18.132 "uuid": "00000000-0000-0000-0000-000000000001",
00:34:18.133 "is_configured": true,
00:34:18.133 "data_offset": 256,
00:34:18.133 "data_size": 7936
00:34:18.133 },
00:34:18.133 {
00:34:18.133 "name": "pt2",
00:34:18.133 "uuid": "00000000-0000-0000-0000-000000000002",
00:34:18.133 "is_configured": true,
00:34:18.133 "data_offset": 256,
00:34:18.133 "data_size": 7936
00:34:18.133 }
00:34:18.133 ]
00:34:18.133 }
00:34:18.133 }
00:34:18.133 }'
00:34:18.133 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:34:18.392 pt2'
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:18.392 [2024-11-26 17:30:48.390251] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 4734e4c7-0813-47be-8d92-c461c2fb482f '!=' 4734e4c7-0813-47be-8d92-c461c2fb482f ']'
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:18.392 [2024-11-26 17:30:48.429940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.392 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:34:18.392 "name": "raid_bdev1",
00:34:18.392 "uuid": "4734e4c7-0813-47be-8d92-c461c2fb482f",
00:34:18.392 "strip_size_kb": 0,
00:34:18.392 "state": "online",
00:34:18.392 "raid_level": "raid1",
00:34:18.392 "superblock": true,
00:34:18.392 "num_base_bdevs": 2,
00:34:18.392 "num_base_bdevs_discovered": 1,
00:34:18.392 "num_base_bdevs_operational": 1,
00:34:18.392 "base_bdevs_list": [
00:34:18.392 {
00:34:18.392 "name": null,
00:34:18.392 "uuid": "00000000-0000-0000-0000-000000000000",
00:34:18.392 "is_configured": false,
00:34:18.392 "data_offset": 0,
00:34:18.392 "data_size": 7936
00:34:18.392 },
00:34:18.392 {
00:34:18.392 "name": "pt2",
00:34:18.393 "uuid": "00000000-0000-0000-0000-000000000002",
00:34:18.393 "is_configured": true,
00:34:18.393 "data_offset": 256,
00:34:18.393 "data_size": 7936
00:34:18.393 }
00:34:18.393 ]
00:34:18.393 }'
00:34:18.393 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:34:18.393 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:18.960 [2024-11-26 17:30:48.881263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:34:18.960 [2024-11-26 17:30:48.881303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:34:18.960 [2024-11-26 17:30:48.881401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:34:18.960 [2024-11-26 17:30:48.881457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:34:18.960 [2024-11-26 17:30:48.881472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:18.960 [2024-11-26 17:30:48.953138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:34:18.960 [2024-11-26 17:30:48.953211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:34:18.960 [2024-11-26 17:30:48.953232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:34:18.960 [2024-11-26 17:30:48.953247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:34:18.960 [2024-11-26 17:30:48.955859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:34:18.960 [2024-11-26 17:30:48.956049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:34:18.960 [2024-11-26 17:30:48.956135] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:34:18.960 [2024-11-26 17:30:48.956199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:34:18.960 [2024-11-26 17:30:48.956316] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:34:18.960 [2024-11-26 17:30:48.956333] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:34:18.960 [2024-11-26 17:30:48.956419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:34:18.960 [2024-11-26 17:30:48.956589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:34:18.960 [2024-11-26 17:30:48.956600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:34:18.960 [2024-11-26 17:30:48.956717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:34:18.960 pt2
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:18.960 17:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.960 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:34:18.960 "name": "raid_bdev1",
00:34:18.960 "uuid": "4734e4c7-0813-47be-8d92-c461c2fb482f",
00:34:18.960 "strip_size_kb": 0,
00:34:18.960 "state": "online",
00:34:18.960 "raid_level": "raid1",
00:34:18.960 "superblock": true,
00:34:18.960 "num_base_bdevs": 2,
00:34:18.960 "num_base_bdevs_discovered": 1,
00:34:18.960 "num_base_bdevs_operational": 1,
00:34:18.960 "base_bdevs_list": [
00:34:18.960 {
00:34:18.960 "name": null,
00:34:18.960 "uuid": "00000000-0000-0000-0000-000000000000",
00:34:18.960 "is_configured": false,
00:34:18.960 "data_offset": 256,
00:34:18.960 "data_size": 7936
00:34:18.960 },
00:34:18.960 {
00:34:18.960 "name": "pt2",
00:34:18.960 "uuid": "00000000-0000-0000-0000-000000000002",
00:34:18.960 "is_configured": true,
00:34:18.960 "data_offset": 256,
00:34:18.960 "data_size": 7936
00:34:18.960 }
00:34:18.960 ]
00:34:18.961 }'
00:34:18.961 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:34:18.961 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:19.525 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:34:19.525 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:19.525 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:19.525 [2024-11-26 17:30:49.424491] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:34:19.525 [2024-11-26 17:30:49.424563] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:34:19.525 [2024-11-26 17:30:49.424659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:34:19.525 [2024-11-26 17:30:49.424721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:34:19.525 [2024-11-26 17:30:49.424734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:34:19.525 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:19.525 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:19.525 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:19.525 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:19.525 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:34:19.525 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:19.525 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:34:19.525 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:34:19.525 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:34:19.525 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:34:19.525 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:19.525 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:19.525 [2024-11-26 17:30:49.484443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:34:19.525 [2024-11-26 17:30:49.484665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:34:19.525 [2024-11-26 17:30:49.484703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:34:19.525 [2024-11-26 17:30:49.484716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:34:19.525 [2024-11-26 17:30:49.487302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:34:19.525 [2024-11-26 17:30:49.487472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:34:19.525 [2024-11-26 17:30:49.487597] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock
found on bdev pt1 00:34:19.526 [2024-11-26 17:30:49.487667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:19.526 [2024-11-26 17:30:49.487823] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:34:19.526 [2024-11-26 17:30:49.487838] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:19.526 [2024-11-26 17:30:49.487862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:34:19.526 [2024-11-26 17:30:49.487957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:19.526 [2024-11-26 17:30:49.488037] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:34:19.526 [2024-11-26 17:30:49.488048] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:19.526 [2024-11-26 17:30:49.488150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:34:19.526 [2024-11-26 17:30:49.488287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:34:19.526 [2024-11-26 17:30:49.488302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:34:19.526 [2024-11-26 17:30:49.488484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:19.526 pt1 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:19.526 "name": "raid_bdev1", 00:34:19.526 "uuid": "4734e4c7-0813-47be-8d92-c461c2fb482f", 00:34:19.526 "strip_size_kb": 0, 00:34:19.526 "state": "online", 00:34:19.526 "raid_level": "raid1", 00:34:19.526 "superblock": true, 00:34:19.526 "num_base_bdevs": 2, 00:34:19.526 "num_base_bdevs_discovered": 1, 00:34:19.526 
"num_base_bdevs_operational": 1, 00:34:19.526 "base_bdevs_list": [ 00:34:19.526 { 00:34:19.526 "name": null, 00:34:19.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.526 "is_configured": false, 00:34:19.526 "data_offset": 256, 00:34:19.526 "data_size": 7936 00:34:19.526 }, 00:34:19.526 { 00:34:19.526 "name": "pt2", 00:34:19.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:19.526 "is_configured": true, 00:34:19.526 "data_offset": 256, 00:34:19.526 "data_size": 7936 00:34:19.526 } 00:34:19.526 ] 00:34:19.526 }' 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:19.526 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:20.094 [2024-11-26 
17:30:49.960117] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 4734e4c7-0813-47be-8d92-c461c2fb482f '!=' 4734e4c7-0813-47be-8d92-c461c2fb482f ']' 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87612 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87612 ']' 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87612 00:34:20.094 17:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:34:20.094 17:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:20.094 17:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87612 00:34:20.094 17:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:20.094 killing process with pid 87612 00:34:20.094 17:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:20.094 17:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87612' 00:34:20.094 17:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87612 00:34:20.094 [2024-11-26 17:30:50.042164] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:20.094 17:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87612 00:34:20.094 [2024-11-26 17:30:50.042288] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:34:20.094 [2024-11-26 17:30:50.042347] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:20.094 [2024-11-26 17:30:50.042373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:34:20.353 [2024-11-26 17:30:50.290081] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:21.781 17:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:34:21.781 00:34:21.781 real 0m6.069s 00:34:21.781 user 0m8.895s 00:34:21.781 sys 0m1.265s 00:34:21.781 17:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:21.781 ************************************ 00:34:21.781 END TEST raid_superblock_test_md_separate 00:34:21.781 ************************************ 00:34:21.781 17:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:21.781 17:30:51 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:34:21.781 17:30:51 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:34:21.781 17:30:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:34:21.781 17:30:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:21.781 17:30:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:21.781 ************************************ 00:34:21.781 START TEST raid_rebuild_test_sb_md_separate 00:34:21.781 ************************************ 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:34:21.781 
17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87939 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87939 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87939 ']' 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:21.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:21.781 17:30:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:21.781 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:34:21.781 Zero copy mechanism will not be used. 00:34:21.781 [2024-11-26 17:30:51.758579] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:34:21.781 [2024-11-26 17:30:51.758727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87939 ] 00:34:22.041 [2024-11-26 17:30:51.932099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.041 [2024-11-26 17:30:52.042290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.299 [2024-11-26 17:30:52.270725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:22.299 [2024-11-26 17:30:52.270775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:22.558 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:22.558 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:34:22.558 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:22.558 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:34:22.558 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.558 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.817 BaseBdev1_malloc 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:22.817 17:30:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.817 [2024-11-26 17:30:52.678613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:22.817 [2024-11-26 17:30:52.678679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:22.817 [2024-11-26 17:30:52.678704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:22.817 [2024-11-26 17:30:52.678721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:22.817 [2024-11-26 17:30:52.681103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:22.817 [2024-11-26 17:30:52.681307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:22.817 BaseBdev1 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.817 BaseBdev2_malloc 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.817 [2024-11-26 17:30:52.742295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:22.817 [2024-11-26 17:30:52.742512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:22.817 [2024-11-26 17:30:52.742561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:22.817 [2024-11-26 17:30:52.742582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:22.817 [2024-11-26 17:30:52.745308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:22.817 [2024-11-26 17:30:52.745352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:22.817 BaseBdev2 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.817 spare_malloc 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.817 spare_delay 00:34:22.817 17:30:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.817 [2024-11-26 17:30:52.830491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:22.817 [2024-11-26 17:30:52.830577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:22.817 [2024-11-26 17:30:52.830602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:22.817 [2024-11-26 17:30:52.830618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:22.817 [2024-11-26 17:30:52.833018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:22.817 [2024-11-26 17:30:52.833071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:22.817 spare 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.817 [2024-11-26 17:30:52.842525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:22.817 [2024-11-26 17:30:52.844834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:34:22.817 [2024-11-26 17:30:52.845035] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:22.817 [2024-11-26 17:30:52.845053] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:22.817 [2024-11-26 17:30:52.845131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:22.817 [2024-11-26 17:30:52.845277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:22.817 [2024-11-26 17:30:52.845289] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:34:22.817 [2024-11-26 17:30:52.845412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.817 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:22.817 "name": "raid_bdev1", 00:34:22.817 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807", 00:34:22.817 "strip_size_kb": 0, 00:34:22.817 "state": "online", 00:34:22.817 "raid_level": "raid1", 00:34:22.817 "superblock": true, 00:34:22.817 "num_base_bdevs": 2, 00:34:22.817 "num_base_bdevs_discovered": 2, 00:34:22.817 "num_base_bdevs_operational": 2, 00:34:22.817 "base_bdevs_list": [ 00:34:22.817 { 00:34:22.817 "name": "BaseBdev1", 00:34:22.817 "uuid": "22d61cff-0400-504b-8d3e-2a3c674c1734", 00:34:22.817 "is_configured": true, 00:34:22.817 "data_offset": 256, 00:34:22.817 "data_size": 7936 00:34:22.817 }, 00:34:22.817 { 00:34:22.817 "name": "BaseBdev2", 00:34:22.817 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66", 00:34:22.817 "is_configured": true, 00:34:22.817 "data_offset": 256, 00:34:22.817 "data_size": 7936 00:34:22.817 } 00:34:22.817 ] 00:34:22.817 }' 00:34:22.818 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:22.818 17:30:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:23.385 17:30:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:34:23.385 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:34:23.385 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:23.385 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:23.385 [2024-11-26 17:30:53.302171] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:34:23.385 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:23.385 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:34:23.385 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:34:23.385 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:23.385 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:23.385 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:23.386 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:23.386 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:34:23.386 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:34:23.386 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:34:23.386 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:34:23.386 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:34:23.386 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:34:23.386 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:34:23.386 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list
00:34:23.386 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:34:23.386 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list
00:34:23.386 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i
00:34:23.386 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:34:23.386 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:34:23.386 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:34:23.645 [2024-11-26 17:30:53.605904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:34:23.645 /dev/nbd0
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:34:23.645 1+0 records in
00:34:23.645 1+0 records out
00:34:23.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391102 s, 10.5 MB/s
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:34:23.645 17:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:34:24.584 7936+0 records in
00:34:24.584 7936+0 records out
00:34:24.584 32505856 bytes (33 MB, 31 MiB) copied, 0.756219 s, 43.0 MB/s
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:34:24.584 [2024-11-26 17:30:54.666804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:24.584 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:24.584 [2024-11-26 17:30:54.690924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:34:24.844 "name": "raid_bdev1",
00:34:24.844 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807",
00:34:24.844 "strip_size_kb": 0,
00:34:24.844 "state": "online",
00:34:24.844 "raid_level": "raid1",
00:34:24.844 "superblock": true,
00:34:24.844 "num_base_bdevs": 2,
00:34:24.844 "num_base_bdevs_discovered": 1,
00:34:24.844 "num_base_bdevs_operational": 1,
00:34:24.844 "base_bdevs_list": [
00:34:24.844 {
00:34:24.844 "name": null,
00:34:24.844 "uuid": "00000000-0000-0000-0000-000000000000",
00:34:24.844 "is_configured": false,
00:34:24.844 "data_offset": 0,
00:34:24.844 "data_size": 7936
00:34:24.844 },
00:34:24.844 {
00:34:24.844 "name": "BaseBdev2",
00:34:24.844 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66",
00:34:24.844 "is_configured": true,
00:34:24.844 "data_offset": 256,
00:34:24.844 "data_size": 7936
00:34:24.844 }
00:34:24.844 ]
00:34:24.844 }'
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:34:24.844 17:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:25.103 17:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:34:25.103 17:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:25.103 17:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:25.103 [2024-11-26 17:30:55.154274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:34:25.103 [2024-11-26 17:30:55.169239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260
00:34:25.103 17:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:25.103 17:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1
00:34:25.103 [2024-11-26 17:30:55.171585] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:34:26.482 "name": "raid_bdev1",
00:34:26.482 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807",
00:34:26.482 "strip_size_kb": 0,
00:34:26.482 "state": "online",
00:34:26.482 "raid_level": "raid1",
00:34:26.482 "superblock": true,
00:34:26.482 "num_base_bdevs": 2,
00:34:26.482 "num_base_bdevs_discovered": 2,
00:34:26.482 "num_base_bdevs_operational": 2,
00:34:26.482 "process": {
00:34:26.482 "type": "rebuild",
00:34:26.482 "target": "spare",
00:34:26.482 "progress": {
00:34:26.482 "blocks": 2560,
00:34:26.482 "percent": 32
00:34:26.482 }
00:34:26.482 },
00:34:26.482 "base_bdevs_list": [
00:34:26.482 {
00:34:26.482 "name": "spare",
00:34:26.482 "uuid": "c4d1eb45-233f-52d6-95af-b707607c25b0",
00:34:26.482 "is_configured": true,
00:34:26.482 "data_offset": 256,
00:34:26.482 "data_size": 7936
00:34:26.482 },
00:34:26.482 {
00:34:26.482 "name": "BaseBdev2",
00:34:26.482 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66",
00:34:26.482 "is_configured": true,
00:34:26.482 "data_offset": 256,
00:34:26.482 "data_size": 7936
00:34:26.482 }
00:34:26.482 ]
00:34:26.482 }'
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:26.482 [2024-11-26 17:30:56.307837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:34:26.482 [2024-11-26 17:30:56.379750] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:34:26.482 [2024-11-26 17:30:56.379826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:34:26.482 [2024-11-26 17:30:56.379844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:34:26.482 [2024-11-26 17:30:56.379860] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:34:26.482 "name": "raid_bdev1",
00:34:26.482 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807",
00:34:26.482 "strip_size_kb": 0,
00:34:26.482 "state": "online",
00:34:26.482 "raid_level": "raid1",
00:34:26.482 "superblock": true,
00:34:26.482 "num_base_bdevs": 2,
00:34:26.482 "num_base_bdevs_discovered": 1,
00:34:26.482 "num_base_bdevs_operational": 1,
00:34:26.482 "base_bdevs_list": [
00:34:26.482 {
00:34:26.482 "name": null,
00:34:26.482 "uuid": "00000000-0000-0000-0000-000000000000",
00:34:26.482 "is_configured": false,
00:34:26.482 "data_offset": 0,
00:34:26.482 "data_size": 7936
00:34:26.482 },
00:34:26.482 {
00:34:26.482 "name": "BaseBdev2",
00:34:26.482 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66",
00:34:26.482 "is_configured": true,
00:34:26.482 "data_offset": 256,
00:34:26.482 "data_size": 7936
00:34:26.482 }
00:34:26.482 ]
00:34:26.482 }'
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:34:26.482 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:26.742 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:34:26.742 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:34:26.742 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:34:26.742 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:34:26.742 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:34:26.742 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:26.742 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:26.742 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:26.742 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:26.742 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:27.001 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:34:27.001 "name": "raid_bdev1",
00:34:27.001 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807",
00:34:27.001 "strip_size_kb": 0,
00:34:27.001 "state": "online",
00:34:27.001 "raid_level": "raid1",
00:34:27.001 "superblock": true,
00:34:27.001 "num_base_bdevs": 2,
00:34:27.001 "num_base_bdevs_discovered": 1,
00:34:27.001 "num_base_bdevs_operational": 1,
00:34:27.001 "base_bdevs_list": [
00:34:27.001 {
00:34:27.001 "name": null,
00:34:27.001 "uuid": "00000000-0000-0000-0000-000000000000",
00:34:27.001 "is_configured": false,
00:34:27.001 "data_offset": 0,
00:34:27.001 "data_size": 7936
00:34:27.001 },
00:34:27.001 {
00:34:27.001 "name": "BaseBdev2",
00:34:27.001 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66",
00:34:27.001 "is_configured": true,
00:34:27.001 "data_offset": 256,
00:34:27.001 "data_size": 7936
00:34:27.001 }
00:34:27.001 ]
00:34:27.001 }'
00:34:27.001 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:34:27.001 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:34:27.001 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:34:27.001 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:34:27.001 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:34:27.001 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:27.001 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:27.001 [2024-11-26 17:30:56.973721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:34:27.001 [2024-11-26 17:30:56.987439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330
00:34:27.001 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:27.001 17:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1
00:34:27.001 [2024-11-26 17:30:56.989827] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:34:27.939 17:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:34:27.939 17:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:34:27.939 17:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:34:27.939 17:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:34:27.939 17:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:34:27.939 17:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:27.939 17:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:27.939 17:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:27.939 17:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:27.939 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:27.939 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:34:27.939 "name": "raid_bdev1",
00:34:27.939 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807",
00:34:27.939 "strip_size_kb": 0,
00:34:27.939 "state": "online",
00:34:27.939 "raid_level": "raid1",
00:34:27.939 "superblock": true,
00:34:27.939 "num_base_bdevs": 2,
00:34:27.939 "num_base_bdevs_discovered": 2,
00:34:27.939 "num_base_bdevs_operational": 2,
00:34:27.939 "process": {
00:34:27.939 "type": "rebuild",
00:34:27.939 "target": "spare",
00:34:27.939 "progress": {
00:34:27.939 "blocks": 2560,
00:34:27.939 "percent": 32
00:34:27.939 }
00:34:27.939 },
00:34:27.939 "base_bdevs_list": [
00:34:27.939 {
00:34:27.939 "name": "spare",
00:34:27.939 "uuid": "c4d1eb45-233f-52d6-95af-b707607c25b0",
00:34:27.939 "is_configured": true,
00:34:27.939 "data_offset": 256,
00:34:27.939 "data_size": 7936
00:34:27.939 },
00:34:27.939 {
00:34:27.939 "name": "BaseBdev2",
00:34:27.939 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66",
00:34:27.939 "is_configured": true,
00:34:27.939 "data_offset": 256,
00:34:27.939 "data_size": 7936
00:34:27.939 }
00:34:27.939 ]
00:34:27.939 }'
00:34:27.939 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:34:28.198 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=724
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:34:28.198 "name": "raid_bdev1",
00:34:28.198 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807",
00:34:28.198 "strip_size_kb": 0,
00:34:28.198 "state": "online",
00:34:28.198 "raid_level": "raid1",
00:34:28.198 "superblock": true,
00:34:28.198 "num_base_bdevs": 2,
00:34:28.198 "num_base_bdevs_discovered": 2,
00:34:28.198 "num_base_bdevs_operational": 2,
00:34:28.198 "process": {
00:34:28.198 "type": "rebuild",
00:34:28.198 "target": "spare",
00:34:28.198 "progress": {
00:34:28.198 "blocks": 2816,
00:34:28.198 "percent": 35
00:34:28.198 }
00:34:28.198 },
00:34:28.198 "base_bdevs_list": [
00:34:28.198 {
00:34:28.198 "name": "spare",
00:34:28.198 "uuid": "c4d1eb45-233f-52d6-95af-b707607c25b0",
00:34:28.198 "is_configured": true,
00:34:28.198 "data_offset": 256,
00:34:28.198 "data_size": 7936
00:34:28.198 },
00:34:28.198 {
00:34:28.198 "name": "BaseBdev2",
00:34:28.198 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66",
00:34:28.198 "is_configured": true,
00:34:28.198 "data_offset": 256,
00:34:28.198 "data_size": 7936
00:34:28.198 }
00:34:28.198 ]
00:34:28.198 }'
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:34:28.198 17:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:34:29.162 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:34:29.162 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:34:29.162 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:34:29.162 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:34:29.162 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:34:29.162 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:34:29.162 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:29.162 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:29.162 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.162 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:29.421 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.421 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:34:29.421 "name": "raid_bdev1",
00:34:29.421 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807",
00:34:29.421 "strip_size_kb": 0,
00:34:29.421 "state": "online",
00:34:29.421 "raid_level": "raid1",
00:34:29.421 "superblock": true,
00:34:29.421 "num_base_bdevs": 2,
00:34:29.421 "num_base_bdevs_discovered": 2,
00:34:29.421 "num_base_bdevs_operational": 2,
00:34:29.421 "process": {
00:34:29.421 "type": "rebuild",
00:34:29.421 "target": "spare",
00:34:29.421 "progress": {
00:34:29.421 "blocks": 5632,
00:34:29.421 "percent": 70
00:34:29.421 }
00:34:29.421 },
00:34:29.421 "base_bdevs_list": [
00:34:29.421 {
00:34:29.421 "name": "spare",
00:34:29.421 "uuid": "c4d1eb45-233f-52d6-95af-b707607c25b0",
00:34:29.421 "is_configured": true,
00:34:29.421 "data_offset": 256,
00:34:29.421 "data_size": 7936
00:34:29.421 },
00:34:29.421 {
00:34:29.421 "name": "BaseBdev2",
00:34:29.421 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66",
00:34:29.421 "is_configured": true,
00:34:29.421 "data_offset": 256,
00:34:29.421 "data_size": 7936
00:34:29.421 }
00:34:29.421 ]
00:34:29.421 }'
00:34:29.421 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:34:29.421 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:34:29.421 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:34:29.421 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:34:29.421 17:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:34:30.357 [2024-11-26 17:31:00.110550] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:34:30.357 [2024-11-26 17:31:00.110652] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:34:30.357 [2024-11-26 17:31:00.110776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:34:30.357 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:34:30.357 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:34:30.357 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:34:30.357 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:34:30.357 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:34:30.357 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:34:30.357 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:30.357 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:30.357 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.357 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:30.357 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.357 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:34:30.357 "name": "raid_bdev1",
00:34:30.357 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807",
00:34:30.357 "strip_size_kb": 0,
00:34:30.357 "state": "online",
00:34:30.357 "raid_level": "raid1",
00:34:30.357 "superblock": true,
00:34:30.357 "num_base_bdevs": 2,
00:34:30.357 "num_base_bdevs_discovered": 2,
00:34:30.357 "num_base_bdevs_operational": 2,
00:34:30.357 "base_bdevs_list": [
00:34:30.357 {
00:34:30.357 "name": "spare",
00:34:30.357 "uuid": "c4d1eb45-233f-52d6-95af-b707607c25b0",
00:34:30.357 "is_configured": true,
00:34:30.357 "data_offset": 256,
00:34:30.357 "data_size": 7936
00:34:30.357 },
00:34:30.357 {
00:34:30.357 "name": "BaseBdev2",
00:34:30.357 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66",
00:34:30.357 "is_configured": true,
00:34:30.357 "data_offset": 256,
00:34:30.357 "data_size": 7936
00:34:30.357 }
00:34:30.357 ]
00:34:30.357 }'
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:34:30.616 "name": "raid_bdev1",
00:34:30.616 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807",
00:34:30.616 "strip_size_kb": 0,
00:34:30.616 "state": "online",
00:34:30.616 "raid_level": "raid1",
00:34:30.616 "superblock": true,
00:34:30.616 "num_base_bdevs": 2,
00:34:30.616 "num_base_bdevs_discovered": 2,
00:34:30.616 "num_base_bdevs_operational": 2,
00:34:30.616 "base_bdevs_list": [
00:34:30.616 {
00:34:30.616 "name": "spare",
00:34:30.616 "uuid": "c4d1eb45-233f-52d6-95af-b707607c25b0",
00:34:30.616 "is_configured": true,
00:34:30.616 "data_offset": 256,
00:34:30.616 "data_size": 7936
00:34:30.616 },
00:34:30.616 {
00:34:30.616 "name": "BaseBdev2",
00:34:30.616 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66",
00:34:30.616 "is_configured": true,
00:34:30.616 "data_offset": 256,
00:34:30.616 "data_size": 7936
00:34:30.616 }
00:34:30.616 ]
00:34:30.616 }'
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:34:30.616 "name": "raid_bdev1",
00:34:30.616 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807",
00:34:30.616 "strip_size_kb": 0,
00:34:30.616 "state": "online",
00:34:30.616 "raid_level": "raid1",
00:34:30.616 "superblock": true,
00:34:30.616 "num_base_bdevs": 2,
00:34:30.616 "num_base_bdevs_discovered": 2,
00:34:30.616 "num_base_bdevs_operational": 2,
00:34:30.616 "base_bdevs_list": [
00:34:30.616 {
00:34:30.616 "name": "spare",
00:34:30.616 "uuid":
"c4d1eb45-233f-52d6-95af-b707607c25b0", 00:34:30.616 "is_configured": true, 00:34:30.616 "data_offset": 256, 00:34:30.616 "data_size": 7936 00:34:30.616 }, 00:34:30.616 { 00:34:30.616 "name": "BaseBdev2", 00:34:30.616 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66", 00:34:30.616 "is_configured": true, 00:34:30.616 "data_offset": 256, 00:34:30.616 "data_size": 7936 00:34:30.616 } 00:34:30.616 ] 00:34:30.616 }' 00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:30.616 17:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:31.181 [2024-11-26 17:31:01.032141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:31.181 [2024-11-26 17:31:01.032183] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:31.181 [2024-11-26 17:31:01.032288] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:31.181 [2024-11-26 17:31:01.032363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:31.181 [2024-11-26 17:31:01.032375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:31.181 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 
/dev/nbd0 00:34:31.437 /dev/nbd0 00:34:31.437 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:31.438 1+0 records in 00:34:31.438 1+0 records out 00:34:31.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391057 s, 10.5 MB/s 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:31.438 17:31:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:31.438 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:34:31.694 /dev/nbd1 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:34:31.694 1+0 records in 00:34:31.694 1+0 records out 00:34:31.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319334 s, 12.8 MB/s 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:31.694 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:34:31.952 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:34:31.952 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:31.952 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:31.952 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:31.952 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:34:31.952 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:31.952 17:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:31.952 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:31.952 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:31.952 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:31.952 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:31.952 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:31.952 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:32.209 
17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:32.209 [2024-11-26 17:31:02.310491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:32.209 [2024-11-26 17:31:02.310565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:32.209 [2024-11-26 17:31:02.310591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:32.209 [2024-11-26 17:31:02.310603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:32.209 [2024-11-26 17:31:02.313038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:32.209 [2024-11-26 17:31:02.313078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:32.209 [2024-11-26 17:31:02.313144] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:34:32.209 [2024-11-26 17:31:02.313205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:32.209 [2024-11-26 17:31:02.313344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:32.209 spare 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.209 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:32.468 [2024-11-26 17:31:02.413283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:34:32.468 [2024-11-26 17:31:02.413323] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:32.468 [2024-11-26 17:31:02.413444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:34:32.468 [2024-11-26 17:31:02.413625] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:34:32.468 [2024-11-26 17:31:02.413637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:34:32.468 [2024-11-26 17:31:02.413797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:32.468 "name": "raid_bdev1", 00:34:32.468 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807", 00:34:32.468 "strip_size_kb": 0, 00:34:32.468 "state": "online", 00:34:32.468 "raid_level": "raid1", 00:34:32.468 "superblock": true, 00:34:32.468 "num_base_bdevs": 2, 00:34:32.468 "num_base_bdevs_discovered": 2, 00:34:32.468 "num_base_bdevs_operational": 2, 00:34:32.468 "base_bdevs_list": [ 
00:34:32.468 { 00:34:32.468 "name": "spare", 00:34:32.468 "uuid": "c4d1eb45-233f-52d6-95af-b707607c25b0", 00:34:32.468 "is_configured": true, 00:34:32.468 "data_offset": 256, 00:34:32.468 "data_size": 7936 00:34:32.468 }, 00:34:32.468 { 00:34:32.468 "name": "BaseBdev2", 00:34:32.468 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66", 00:34:32.468 "is_configured": true, 00:34:32.468 "data_offset": 256, 00:34:32.468 "data_size": 7936 00:34:32.468 } 00:34:32.468 ] 00:34:32.468 }' 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:32.468 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:32.725 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:32.725 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:32.725 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:32.725 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:32.725 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:32.983 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.983 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:32.983 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.983 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:32.983 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.983 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:32.983 "name": "raid_bdev1", 00:34:32.983 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807", 00:34:32.983 "strip_size_kb": 0, 00:34:32.983 "state": "online", 00:34:32.983 "raid_level": "raid1", 00:34:32.983 "superblock": true, 00:34:32.983 "num_base_bdevs": 2, 00:34:32.983 "num_base_bdevs_discovered": 2, 00:34:32.983 "num_base_bdevs_operational": 2, 00:34:32.983 "base_bdevs_list": [ 00:34:32.983 { 00:34:32.983 "name": "spare", 00:34:32.983 "uuid": "c4d1eb45-233f-52d6-95af-b707607c25b0", 00:34:32.983 "is_configured": true, 00:34:32.983 "data_offset": 256, 00:34:32.983 "data_size": 7936 00:34:32.983 }, 00:34:32.983 { 00:34:32.983 "name": "BaseBdev2", 00:34:32.983 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66", 00:34:32.983 "is_configured": true, 00:34:32.983 "data_offset": 256, 00:34:32.983 "data_size": 7936 00:34:32.983 } 00:34:32.984 ] 00:34:32.984 }' 00:34:32.984 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:32.984 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:32.984 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:32.984 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:32.984 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.984 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.984 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:32.984 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:34:32.984 17:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:32.984 [2024-11-26 17:31:03.009844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:32.984 "name": "raid_bdev1", 00:34:32.984 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807", 00:34:32.984 "strip_size_kb": 0, 00:34:32.984 "state": "online", 00:34:32.984 "raid_level": "raid1", 00:34:32.984 "superblock": true, 00:34:32.984 "num_base_bdevs": 2, 00:34:32.984 "num_base_bdevs_discovered": 1, 00:34:32.984 "num_base_bdevs_operational": 1, 00:34:32.984 "base_bdevs_list": [ 00:34:32.984 { 00:34:32.984 "name": null, 00:34:32.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:32.984 "is_configured": false, 00:34:32.984 "data_offset": 0, 00:34:32.984 "data_size": 7936 00:34:32.984 }, 00:34:32.984 { 00:34:32.984 "name": "BaseBdev2", 00:34:32.984 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66", 00:34:32.984 "is_configured": true, 00:34:32.984 "data_offset": 256, 00:34:32.984 "data_size": 7936 00:34:32.984 } 00:34:32.984 ] 00:34:32.984 }' 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:32.984 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:33.601 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:33.601 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:33.601 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:33.601 [2024-11-26 17:31:03.377876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:33.601 [2024-11-26 17:31:03.378107] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:33.601 [2024-11-26 17:31:03.378134] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:33.601 [2024-11-26 17:31:03.378178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:33.601 [2024-11-26 17:31:03.391713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:34:33.601 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.601 17:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:34:33.601 [2024-11-26 17:31:03.393943] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:34.538 "name": "raid_bdev1", 00:34:34.538 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807", 00:34:34.538 "strip_size_kb": 0, 00:34:34.538 "state": "online", 00:34:34.538 "raid_level": "raid1", 00:34:34.538 "superblock": true, 00:34:34.538 "num_base_bdevs": 2, 00:34:34.538 "num_base_bdevs_discovered": 2, 00:34:34.538 "num_base_bdevs_operational": 2, 00:34:34.538 "process": { 00:34:34.538 "type": "rebuild", 00:34:34.538 "target": "spare", 00:34:34.538 "progress": { 00:34:34.538 "blocks": 2560, 00:34:34.538 "percent": 32 00:34:34.538 } 00:34:34.538 }, 00:34:34.538 "base_bdevs_list": [ 00:34:34.538 { 00:34:34.538 "name": "spare", 00:34:34.538 "uuid": "c4d1eb45-233f-52d6-95af-b707607c25b0", 00:34:34.538 "is_configured": true, 00:34:34.538 "data_offset": 256, 00:34:34.538 "data_size": 7936 00:34:34.538 }, 00:34:34.538 { 00:34:34.538 "name": "BaseBdev2", 00:34:34.538 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66", 00:34:34.538 "is_configured": true, 00:34:34.538 "data_offset": 256, 00:34:34.538 "data_size": 7936 00:34:34.538 } 00:34:34.538 ] 00:34:34.538 }' 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:34.538 17:31:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:34:34.538 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.539 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:34.539 [2024-11-26 17:31:04.546059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:34.539 [2024-11-26 17:31:04.601673] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:34.539 [2024-11-26 17:31:04.601812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:34.539 [2024-11-26 17:31:04.601830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:34.539 [2024-11-26 17:31:04.601857] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:34.539 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.539 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:34.539 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:34.539 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:34.539 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:34.539 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:34.539 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:34.539 17:31:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:34.539 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:34.539 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:34.539 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:34.539 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:34.539 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.539 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:34.539 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:34.797 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.797 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:34.797 "name": "raid_bdev1", 00:34:34.797 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807", 00:34:34.797 "strip_size_kb": 0, 00:34:34.797 "state": "online", 00:34:34.797 "raid_level": "raid1", 00:34:34.797 "superblock": true, 00:34:34.797 "num_base_bdevs": 2, 00:34:34.798 "num_base_bdevs_discovered": 1, 00:34:34.798 "num_base_bdevs_operational": 1, 00:34:34.798 "base_bdevs_list": [ 00:34:34.798 { 00:34:34.798 "name": null, 00:34:34.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:34.798 "is_configured": false, 00:34:34.798 "data_offset": 0, 00:34:34.798 "data_size": 7936 00:34:34.798 }, 00:34:34.798 { 00:34:34.798 "name": "BaseBdev2", 00:34:34.798 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66", 00:34:34.798 "is_configured": true, 00:34:34.798 "data_offset": 256, 00:34:34.798 "data_size": 7936 00:34:34.798 } 
00:34:34.798 ] 00:34:34.798 }' 00:34:34.798 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:34.798 17:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:35.056 17:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:35.056 17:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.056 17:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:35.056 [2024-11-26 17:31:05.051285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:35.056 [2024-11-26 17:31:05.051365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:35.056 [2024-11-26 17:31:05.051395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:34:35.056 [2024-11-26 17:31:05.051409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:35.056 [2024-11-26 17:31:05.051721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:35.056 [2024-11-26 17:31:05.051748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:35.056 [2024-11-26 17:31:05.051820] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:35.056 [2024-11-26 17:31:05.051837] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:35.056 [2024-11-26 17:31:05.051849] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:34:35.056 [2024-11-26 17:31:05.051874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:35.056 [2024-11-26 17:31:05.066006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:34:35.056 spare 00:34:35.056 17:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.056 17:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:34:35.056 [2024-11-26 17:31:05.068274] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:35.992 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:35.992 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:35.992 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:35.992 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:35.992 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:35.992 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:35.992 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.992 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:35.992 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:36.251 "name": 
"raid_bdev1", 00:34:36.251 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807", 00:34:36.251 "strip_size_kb": 0, 00:34:36.251 "state": "online", 00:34:36.251 "raid_level": "raid1", 00:34:36.251 "superblock": true, 00:34:36.251 "num_base_bdevs": 2, 00:34:36.251 "num_base_bdevs_discovered": 2, 00:34:36.251 "num_base_bdevs_operational": 2, 00:34:36.251 "process": { 00:34:36.251 "type": "rebuild", 00:34:36.251 "target": "spare", 00:34:36.251 "progress": { 00:34:36.251 "blocks": 2560, 00:34:36.251 "percent": 32 00:34:36.251 } 00:34:36.251 }, 00:34:36.251 "base_bdevs_list": [ 00:34:36.251 { 00:34:36.251 "name": "spare", 00:34:36.251 "uuid": "c4d1eb45-233f-52d6-95af-b707607c25b0", 00:34:36.251 "is_configured": true, 00:34:36.251 "data_offset": 256, 00:34:36.251 "data_size": 7936 00:34:36.251 }, 00:34:36.251 { 00:34:36.251 "name": "BaseBdev2", 00:34:36.251 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66", 00:34:36.251 "is_configured": true, 00:34:36.251 "data_offset": 256, 00:34:36.251 "data_size": 7936 00:34:36.251 } 00:34:36.251 ] 00:34:36.251 }' 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:36.251 [2024-11-26 17:31:06.220712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:34:36.251 [2024-11-26 17:31:06.276217] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:36.251 [2024-11-26 17:31:06.276289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:36.251 [2024-11-26 17:31:06.276310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:36.251 [2024-11-26 17:31:06.276319] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:36.251 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:36.252 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:36.252 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:36.252 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:34:36.252 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.252 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:36.252 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:36.252 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.252 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:36.252 "name": "raid_bdev1", 00:34:36.252 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807", 00:34:36.252 "strip_size_kb": 0, 00:34:36.252 "state": "online", 00:34:36.252 "raid_level": "raid1", 00:34:36.252 "superblock": true, 00:34:36.252 "num_base_bdevs": 2, 00:34:36.252 "num_base_bdevs_discovered": 1, 00:34:36.252 "num_base_bdevs_operational": 1, 00:34:36.252 "base_bdevs_list": [ 00:34:36.252 { 00:34:36.252 "name": null, 00:34:36.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:36.252 "is_configured": false, 00:34:36.252 "data_offset": 0, 00:34:36.252 "data_size": 7936 00:34:36.252 }, 00:34:36.252 { 00:34:36.252 "name": "BaseBdev2", 00:34:36.252 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66", 00:34:36.252 "is_configured": true, 00:34:36.252 "data_offset": 256, 00:34:36.252 "data_size": 7936 00:34:36.252 } 00:34:36.252 ] 00:34:36.252 }' 00:34:36.252 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:36.252 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:36.820 17:31:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:36.820 "name": "raid_bdev1", 00:34:36.820 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807", 00:34:36.820 "strip_size_kb": 0, 00:34:36.820 "state": "online", 00:34:36.820 "raid_level": "raid1", 00:34:36.820 "superblock": true, 00:34:36.820 "num_base_bdevs": 2, 00:34:36.820 "num_base_bdevs_discovered": 1, 00:34:36.820 "num_base_bdevs_operational": 1, 00:34:36.820 "base_bdevs_list": [ 00:34:36.820 { 00:34:36.820 "name": null, 00:34:36.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:36.820 "is_configured": false, 00:34:36.820 "data_offset": 0, 00:34:36.820 "data_size": 7936 00:34:36.820 }, 00:34:36.820 { 00:34:36.820 "name": "BaseBdev2", 00:34:36.820 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66", 00:34:36.820 "is_configured": true, 00:34:36.820 "data_offset": 256, 00:34:36.820 "data_size": 7936 00:34:36.820 } 00:34:36.820 ] 00:34:36.820 }' 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:36.820 [2024-11-26 17:31:06.905834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:36.820 [2024-11-26 17:31:06.905907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:36.820 [2024-11-26 17:31:06.905933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:34:36.820 [2024-11-26 17:31:06.905945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:36.820 [2024-11-26 17:31:06.906249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:36.820 [2024-11-26 17:31:06.906269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:34:36.820 [2024-11-26 17:31:06.906329] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:36.820 [2024-11-26 17:31:06.906344] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:36.820 [2024-11-26 17:31:06.906356] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:36.820 [2024-11-26 17:31:06.906369] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:34:36.820 BaseBdev1 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.820 17:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:38.198 "name": "raid_bdev1", 00:34:38.198 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807", 00:34:38.198 "strip_size_kb": 0, 00:34:38.198 "state": "online", 00:34:38.198 "raid_level": "raid1", 00:34:38.198 "superblock": true, 00:34:38.198 "num_base_bdevs": 2, 00:34:38.198 "num_base_bdevs_discovered": 1, 00:34:38.198 "num_base_bdevs_operational": 1, 00:34:38.198 "base_bdevs_list": [ 00:34:38.198 { 00:34:38.198 "name": null, 00:34:38.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:38.198 "is_configured": false, 00:34:38.198 "data_offset": 0, 00:34:38.198 "data_size": 7936 00:34:38.198 }, 00:34:38.198 { 00:34:38.198 "name": "BaseBdev2", 00:34:38.198 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66", 00:34:38.198 "is_configured": true, 00:34:38.198 "data_offset": 256, 00:34:38.198 "data_size": 7936 00:34:38.198 } 00:34:38.198 ] 00:34:38.198 }' 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:38.198 17:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:38.457 "name": "raid_bdev1", 00:34:38.457 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807", 00:34:38.457 "strip_size_kb": 0, 00:34:38.457 "state": "online", 00:34:38.457 "raid_level": "raid1", 00:34:38.457 "superblock": true, 00:34:38.457 "num_base_bdevs": 2, 00:34:38.457 "num_base_bdevs_discovered": 1, 00:34:38.457 "num_base_bdevs_operational": 1, 00:34:38.457 "base_bdevs_list": [ 00:34:38.457 { 00:34:38.457 "name": null, 00:34:38.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:38.457 "is_configured": false, 00:34:38.457 "data_offset": 0, 00:34:38.457 "data_size": 7936 00:34:38.457 }, 00:34:38.457 { 00:34:38.457 "name": "BaseBdev2", 00:34:38.457 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66", 00:34:38.457 "is_configured": 
true, 00:34:38.457 "data_offset": 256, 00:34:38.457 "data_size": 7936 00:34:38.457 } 00:34:38.457 ] 00:34:38.457 }' 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:38.457 [2024-11-26 17:31:08.496369] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:38.457 [2024-11-26 17:31:08.496591] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:38.457 [2024-11-26 17:31:08.496612] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:38.457 request: 00:34:38.457 { 00:34:38.457 "base_bdev": "BaseBdev1", 00:34:38.457 "raid_bdev": "raid_bdev1", 00:34:38.457 "method": "bdev_raid_add_base_bdev", 00:34:38.457 "req_id": 1 00:34:38.457 } 00:34:38.457 Got JSON-RPC error response 00:34:38.457 response: 00:34:38.457 { 00:34:38.457 "code": -22, 00:34:38.457 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:38.457 } 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:38.457 17:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.832 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:39.832 "name": "raid_bdev1", 00:34:39.832 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807", 00:34:39.832 "strip_size_kb": 0, 00:34:39.832 "state": "online", 00:34:39.832 "raid_level": "raid1", 00:34:39.832 "superblock": true, 00:34:39.832 "num_base_bdevs": 2, 00:34:39.832 "num_base_bdevs_discovered": 1, 00:34:39.832 "num_base_bdevs_operational": 1, 00:34:39.832 "base_bdevs_list": [ 00:34:39.832 { 00:34:39.832 "name": null, 00:34:39.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:39.832 "is_configured": false, 00:34:39.832 
"data_offset": 0, 00:34:39.832 "data_size": 7936 00:34:39.832 }, 00:34:39.832 { 00:34:39.832 "name": "BaseBdev2", 00:34:39.832 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66", 00:34:39.832 "is_configured": true, 00:34:39.832 "data_offset": 256, 00:34:39.832 "data_size": 7936 00:34:39.832 } 00:34:39.832 ] 00:34:39.832 }' 00:34:39.833 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:39.833 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:40.102 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:40.102 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:40.102 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:40.102 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:40.102 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:40.102 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:40.102 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.102 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:40.102 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:40.102 17:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.102 17:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:40.102 "name": "raid_bdev1", 00:34:40.102 "uuid": "fc302d51-ab6d-4583-adbd-921e018e0807", 00:34:40.102 
"strip_size_kb": 0, 00:34:40.102 "state": "online", 00:34:40.102 "raid_level": "raid1", 00:34:40.102 "superblock": true, 00:34:40.102 "num_base_bdevs": 2, 00:34:40.102 "num_base_bdevs_discovered": 1, 00:34:40.102 "num_base_bdevs_operational": 1, 00:34:40.102 "base_bdevs_list": [ 00:34:40.102 { 00:34:40.102 "name": null, 00:34:40.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.102 "is_configured": false, 00:34:40.102 "data_offset": 0, 00:34:40.102 "data_size": 7936 00:34:40.102 }, 00:34:40.102 { 00:34:40.102 "name": "BaseBdev2", 00:34:40.102 "uuid": "647b7740-6c1d-560b-b913-e4ab93cdeb66", 00:34:40.102 "is_configured": true, 00:34:40.102 "data_offset": 256, 00:34:40.102 "data_size": 7936 00:34:40.102 } 00:34:40.102 ] 00:34:40.102 }' 00:34:40.102 17:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:40.102 17:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:40.102 17:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:40.102 17:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:40.102 17:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87939 00:34:40.102 17:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87939 ']' 00:34:40.102 17:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87939 00:34:40.102 17:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:34:40.102 17:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:40.102 17:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87939 00:34:40.102 17:31:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:40.102 killing process with pid 87939 00:34:40.102 Received shutdown signal, test time was about 60.000000 seconds 00:34:40.102 00:34:40.102 Latency(us) 00:34:40.102 [2024-11-26T17:31:10.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:40.102 [2024-11-26T17:31:10.216Z] =================================================================================================================== 00:34:40.102 [2024-11-26T17:31:10.216Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:40.102 17:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:40.102 17:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87939' 00:34:40.102 17:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87939 00:34:40.103 [2024-11-26 17:31:10.138978] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:40.103 17:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87939 00:34:40.103 [2024-11-26 17:31:10.139122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:40.103 [2024-11-26 17:31:10.139174] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:40.103 [2024-11-26 17:31:10.139189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:34:40.383 [2024-11-26 17:31:10.484866] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:41.762 17:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:34:41.762 00:34:41.762 real 0m20.057s 00:34:41.762 user 0m25.776s 00:34:41.762 sys 0m3.156s 00:34:41.762 17:31:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:41.762 17:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:41.762 ************************************ 00:34:41.762 END TEST raid_rebuild_test_sb_md_separate 00:34:41.762 ************************************ 00:34:41.762 17:31:11 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:34:41.762 17:31:11 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:34:41.762 17:31:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:41.762 17:31:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:41.762 17:31:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:41.762 ************************************ 00:34:41.762 START TEST raid_state_function_test_sb_md_interleaved 00:34:41.762 ************************************ 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:41.762 17:31:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:34:41.762 Process raid pid: 88634 00:34:41.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88634 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88634' 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88634 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88634 ']' 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:41.762 17:31:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:42.021 [2024-11-26 17:31:11.887625] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:34:42.021 [2024-11-26 17:31:11.887977] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:42.021 [2024-11-26 17:31:12.070740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.280 [2024-11-26 17:31:12.210029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:42.539 [2024-11-26 17:31:12.447899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:42.539 [2024-11-26 17:31:12.448158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:42.798 [2024-11-26 17:31:12.744335] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:42.798 [2024-11-26 17:31:12.744416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:42.798 [2024-11-26 17:31:12.744430] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:42.798 [2024-11-26 17:31:12.744446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:42.798 17:31:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:42.798 17:31:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:42.798 "name": "Existed_Raid", 00:34:42.798 "uuid": "1d578022-7458-4ac7-a385-41605041def0", 00:34:42.798 "strip_size_kb": 0, 00:34:42.798 "state": "configuring", 00:34:42.798 "raid_level": "raid1", 00:34:42.798 "superblock": true, 00:34:42.798 "num_base_bdevs": 2, 00:34:42.798 "num_base_bdevs_discovered": 0, 00:34:42.798 "num_base_bdevs_operational": 2, 00:34:42.798 "base_bdevs_list": [ 00:34:42.798 { 00:34:42.798 "name": "BaseBdev1", 00:34:42.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:42.798 "is_configured": false, 00:34:42.798 "data_offset": 0, 00:34:42.798 "data_size": 0 00:34:42.798 }, 00:34:42.798 { 00:34:42.798 "name": "BaseBdev2", 00:34:42.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:42.798 "is_configured": false, 00:34:42.798 "data_offset": 0, 00:34:42.798 "data_size": 0 00:34:42.798 } 00:34:42.798 ] 00:34:42.798 }' 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:42.798 17:31:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.368 [2024-11-26 17:31:13.211744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:43.368 [2024-11-26 17:31:13.211796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.368 [2024-11-26 17:31:13.223756] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:43.368 [2024-11-26 17:31:13.223828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:43.368 [2024-11-26 17:31:13.223842] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:43.368 [2024-11-26 17:31:13.223860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.368 [2024-11-26 17:31:13.277252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:43.368 BaseBdev1 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:43.368 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.369 [ 00:34:43.369 { 00:34:43.369 "name": "BaseBdev1", 00:34:43.369 "aliases": [ 00:34:43.369 "a69c4f2c-3f0c-4307-a479-827ca942d94b" 00:34:43.369 ], 00:34:43.369 "product_name": "Malloc disk", 00:34:43.369 "block_size": 4128, 00:34:43.369 "num_blocks": 8192, 00:34:43.369 "uuid": "a69c4f2c-3f0c-4307-a479-827ca942d94b", 00:34:43.369 "md_size": 32, 00:34:43.369 
"md_interleave": true, 00:34:43.369 "dif_type": 0, 00:34:43.369 "assigned_rate_limits": { 00:34:43.369 "rw_ios_per_sec": 0, 00:34:43.369 "rw_mbytes_per_sec": 0, 00:34:43.369 "r_mbytes_per_sec": 0, 00:34:43.369 "w_mbytes_per_sec": 0 00:34:43.369 }, 00:34:43.369 "claimed": true, 00:34:43.369 "claim_type": "exclusive_write", 00:34:43.369 "zoned": false, 00:34:43.369 "supported_io_types": { 00:34:43.369 "read": true, 00:34:43.369 "write": true, 00:34:43.369 "unmap": true, 00:34:43.369 "flush": true, 00:34:43.369 "reset": true, 00:34:43.369 "nvme_admin": false, 00:34:43.369 "nvme_io": false, 00:34:43.369 "nvme_io_md": false, 00:34:43.369 "write_zeroes": true, 00:34:43.369 "zcopy": true, 00:34:43.369 "get_zone_info": false, 00:34:43.369 "zone_management": false, 00:34:43.369 "zone_append": false, 00:34:43.369 "compare": false, 00:34:43.369 "compare_and_write": false, 00:34:43.369 "abort": true, 00:34:43.369 "seek_hole": false, 00:34:43.369 "seek_data": false, 00:34:43.369 "copy": true, 00:34:43.369 "nvme_iov_md": false 00:34:43.369 }, 00:34:43.369 "memory_domains": [ 00:34:43.369 { 00:34:43.369 "dma_device_id": "system", 00:34:43.369 "dma_device_type": 1 00:34:43.369 }, 00:34:43.369 { 00:34:43.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:43.369 "dma_device_type": 2 00:34:43.369 } 00:34:43.369 ], 00:34:43.369 "driver_specific": {} 00:34:43.369 } 00:34:43.369 ] 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:43.369 17:31:13 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:43.369 "name": "Existed_Raid", 00:34:43.369 "uuid": "d5134ccb-3e51-491c-9214-7db489d4d494", 00:34:43.369 "strip_size_kb": 0, 00:34:43.369 "state": "configuring", 00:34:43.369 "raid_level": "raid1", 
00:34:43.369 "superblock": true, 00:34:43.369 "num_base_bdevs": 2, 00:34:43.369 "num_base_bdevs_discovered": 1, 00:34:43.369 "num_base_bdevs_operational": 2, 00:34:43.369 "base_bdevs_list": [ 00:34:43.369 { 00:34:43.369 "name": "BaseBdev1", 00:34:43.369 "uuid": "a69c4f2c-3f0c-4307-a479-827ca942d94b", 00:34:43.369 "is_configured": true, 00:34:43.369 "data_offset": 256, 00:34:43.369 "data_size": 7936 00:34:43.369 }, 00:34:43.369 { 00:34:43.369 "name": "BaseBdev2", 00:34:43.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:43.369 "is_configured": false, 00:34:43.369 "data_offset": 0, 00:34:43.369 "data_size": 0 00:34:43.369 } 00:34:43.369 ] 00:34:43.369 }' 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:43.369 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.629 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:43.629 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.629 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.629 [2024-11-26 17:31:13.740699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:43.629 [2024-11-26 17:31:13.740764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.889 [2024-11-26 17:31:13.748744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:43.889 [2024-11-26 17:31:13.751043] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:43.889 [2024-11-26 17:31:13.751237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:43.889 
17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:43.889 "name": "Existed_Raid", 00:34:43.889 "uuid": "54beec98-1cfa-4a50-8927-16b597046717", 00:34:43.889 "strip_size_kb": 0, 00:34:43.889 "state": "configuring", 00:34:43.889 "raid_level": "raid1", 00:34:43.889 "superblock": true, 00:34:43.889 "num_base_bdevs": 2, 00:34:43.889 "num_base_bdevs_discovered": 1, 00:34:43.889 "num_base_bdevs_operational": 2, 00:34:43.889 "base_bdevs_list": [ 00:34:43.889 { 00:34:43.889 "name": "BaseBdev1", 00:34:43.889 "uuid": "a69c4f2c-3f0c-4307-a479-827ca942d94b", 00:34:43.889 "is_configured": true, 00:34:43.889 "data_offset": 256, 00:34:43.889 "data_size": 7936 00:34:43.889 }, 00:34:43.889 { 00:34:43.889 "name": "BaseBdev2", 00:34:43.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:43.889 "is_configured": false, 00:34:43.889 "data_offset": 0, 00:34:43.889 "data_size": 0 00:34:43.889 } 00:34:43.889 ] 00:34:43.889 }' 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:34:43.889 17:31:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:44.149 [2024-11-26 17:31:14.219567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:44.149 [2024-11-26 17:31:14.220059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:44.149 BaseBdev2 00:34:44.149 [2024-11-26 17:31:14.220196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:44.149 [2024-11-26 17:31:14.220318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:34:44.149 [2024-11-26 17:31:14.220410] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:44.149 [2024-11-26 17:31:14.220426] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:34:44.149 [2024-11-26 17:31:14.220503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.149 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:44.149 [ 00:34:44.149 { 00:34:44.149 "name": "BaseBdev2", 00:34:44.149 "aliases": [ 00:34:44.149 "81b68247-85a3-4690-93bd-4e897429544e" 00:34:44.149 ], 00:34:44.149 "product_name": "Malloc disk", 00:34:44.149 "block_size": 4128, 00:34:44.149 "num_blocks": 8192, 00:34:44.149 "uuid": "81b68247-85a3-4690-93bd-4e897429544e", 00:34:44.149 "md_size": 32, 00:34:44.149 "md_interleave": true, 00:34:44.149 "dif_type": 0, 00:34:44.149 "assigned_rate_limits": { 00:34:44.149 "rw_ios_per_sec": 0, 00:34:44.149 "rw_mbytes_per_sec": 0, 00:34:44.149 "r_mbytes_per_sec": 0, 00:34:44.149 "w_mbytes_per_sec": 0 00:34:44.149 }, 00:34:44.149 "claimed": true, 00:34:44.149 "claim_type": "exclusive_write", 
00:34:44.149 "zoned": false, 00:34:44.149 "supported_io_types": { 00:34:44.149 "read": true, 00:34:44.149 "write": true, 00:34:44.408 "unmap": true, 00:34:44.408 "flush": true, 00:34:44.408 "reset": true, 00:34:44.408 "nvme_admin": false, 00:34:44.408 "nvme_io": false, 00:34:44.408 "nvme_io_md": false, 00:34:44.408 "write_zeroes": true, 00:34:44.408 "zcopy": true, 00:34:44.408 "get_zone_info": false, 00:34:44.408 "zone_management": false, 00:34:44.408 "zone_append": false, 00:34:44.408 "compare": false, 00:34:44.408 "compare_and_write": false, 00:34:44.408 "abort": true, 00:34:44.408 "seek_hole": false, 00:34:44.408 "seek_data": false, 00:34:44.408 "copy": true, 00:34:44.408 "nvme_iov_md": false 00:34:44.408 }, 00:34:44.408 "memory_domains": [ 00:34:44.408 { 00:34:44.408 "dma_device_id": "system", 00:34:44.408 "dma_device_type": 1 00:34:44.408 }, 00:34:44.408 { 00:34:44.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:44.409 "dma_device_type": 2 00:34:44.409 } 00:34:44.409 ], 00:34:44.409 "driver_specific": {} 00:34:44.409 } 00:34:44.409 ] 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:44.409 
17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:44.409 "name": "Existed_Raid", 00:34:44.409 "uuid": "54beec98-1cfa-4a50-8927-16b597046717", 00:34:44.409 "strip_size_kb": 0, 00:34:44.409 "state": "online", 00:34:44.409 "raid_level": "raid1", 00:34:44.409 "superblock": true, 00:34:44.409 "num_base_bdevs": 2, 00:34:44.409 "num_base_bdevs_discovered": 2, 00:34:44.409 
"num_base_bdevs_operational": 2, 00:34:44.409 "base_bdevs_list": [ 00:34:44.409 { 00:34:44.409 "name": "BaseBdev1", 00:34:44.409 "uuid": "a69c4f2c-3f0c-4307-a479-827ca942d94b", 00:34:44.409 "is_configured": true, 00:34:44.409 "data_offset": 256, 00:34:44.409 "data_size": 7936 00:34:44.409 }, 00:34:44.409 { 00:34:44.409 "name": "BaseBdev2", 00:34:44.409 "uuid": "81b68247-85a3-4690-93bd-4e897429544e", 00:34:44.409 "is_configured": true, 00:34:44.409 "data_offset": 256, 00:34:44.409 "data_size": 7936 00:34:44.409 } 00:34:44.409 ] 00:34:44.409 }' 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:44.409 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:44.668 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:34:44.668 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:44.668 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:44.668 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:44.668 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:34:44.668 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:44.668 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:44.668 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.668 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:44.668 17:31:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:44.668 [2024-11-26 17:31:14.699229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:44.668 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.668 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:44.668 "name": "Existed_Raid", 00:34:44.668 "aliases": [ 00:34:44.668 "54beec98-1cfa-4a50-8927-16b597046717" 00:34:44.668 ], 00:34:44.668 "product_name": "Raid Volume", 00:34:44.668 "block_size": 4128, 00:34:44.668 "num_blocks": 7936, 00:34:44.668 "uuid": "54beec98-1cfa-4a50-8927-16b597046717", 00:34:44.668 "md_size": 32, 00:34:44.668 "md_interleave": true, 00:34:44.668 "dif_type": 0, 00:34:44.668 "assigned_rate_limits": { 00:34:44.668 "rw_ios_per_sec": 0, 00:34:44.668 "rw_mbytes_per_sec": 0, 00:34:44.668 "r_mbytes_per_sec": 0, 00:34:44.668 "w_mbytes_per_sec": 0 00:34:44.668 }, 00:34:44.668 "claimed": false, 00:34:44.668 "zoned": false, 00:34:44.668 "supported_io_types": { 00:34:44.669 "read": true, 00:34:44.669 "write": true, 00:34:44.669 "unmap": false, 00:34:44.669 "flush": false, 00:34:44.669 "reset": true, 00:34:44.669 "nvme_admin": false, 00:34:44.669 "nvme_io": false, 00:34:44.669 "nvme_io_md": false, 00:34:44.669 "write_zeroes": true, 00:34:44.669 "zcopy": false, 00:34:44.669 "get_zone_info": false, 00:34:44.669 "zone_management": false, 00:34:44.669 "zone_append": false, 00:34:44.669 "compare": false, 00:34:44.669 "compare_and_write": false, 00:34:44.669 "abort": false, 00:34:44.669 "seek_hole": false, 00:34:44.669 "seek_data": false, 00:34:44.669 "copy": false, 00:34:44.669 "nvme_iov_md": false 00:34:44.669 }, 00:34:44.669 "memory_domains": [ 00:34:44.669 { 00:34:44.669 "dma_device_id": "system", 00:34:44.669 "dma_device_type": 1 00:34:44.669 }, 00:34:44.669 { 00:34:44.669 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:34:44.669 "dma_device_type": 2 00:34:44.669 }, 00:34:44.669 { 00:34:44.669 "dma_device_id": "system", 00:34:44.669 "dma_device_type": 1 00:34:44.669 }, 00:34:44.669 { 00:34:44.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:44.669 "dma_device_type": 2 00:34:44.669 } 00:34:44.669 ], 00:34:44.669 "driver_specific": { 00:34:44.669 "raid": { 00:34:44.669 "uuid": "54beec98-1cfa-4a50-8927-16b597046717", 00:34:44.669 "strip_size_kb": 0, 00:34:44.669 "state": "online", 00:34:44.669 "raid_level": "raid1", 00:34:44.669 "superblock": true, 00:34:44.669 "num_base_bdevs": 2, 00:34:44.669 "num_base_bdevs_discovered": 2, 00:34:44.669 "num_base_bdevs_operational": 2, 00:34:44.669 "base_bdevs_list": [ 00:34:44.669 { 00:34:44.669 "name": "BaseBdev1", 00:34:44.669 "uuid": "a69c4f2c-3f0c-4307-a479-827ca942d94b", 00:34:44.669 "is_configured": true, 00:34:44.669 "data_offset": 256, 00:34:44.669 "data_size": 7936 00:34:44.669 }, 00:34:44.669 { 00:34:44.669 "name": "BaseBdev2", 00:34:44.669 "uuid": "81b68247-85a3-4690-93bd-4e897429544e", 00:34:44.669 "is_configured": true, 00:34:44.669 "data_offset": 256, 00:34:44.669 "data_size": 7936 00:34:44.669 } 00:34:44.669 ] 00:34:44.669 } 00:34:44.669 } 00:34:44.669 }' 00:34:44.669 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:44.669 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:34:44.669 BaseBdev2' 00:34:44.669 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.928 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:34:44.929 
17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:34:44.929 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:44.929 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.929 17:31:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:44.929 [2024-11-26 17:31:14.906704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:44.929 17:31:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.929 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.208 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.208 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:45.208 "name": "Existed_Raid", 00:34:45.208 "uuid": "54beec98-1cfa-4a50-8927-16b597046717", 00:34:45.208 "strip_size_kb": 0, 00:34:45.208 "state": "online", 00:34:45.208 "raid_level": "raid1", 00:34:45.208 "superblock": true, 00:34:45.208 "num_base_bdevs": 2, 00:34:45.208 "num_base_bdevs_discovered": 1, 00:34:45.208 "num_base_bdevs_operational": 1, 00:34:45.208 "base_bdevs_list": [ 00:34:45.208 { 00:34:45.208 "name": null, 00:34:45.208 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:34:45.208 "is_configured": false, 00:34:45.208 "data_offset": 0, 00:34:45.208 "data_size": 7936 00:34:45.208 }, 00:34:45.208 { 00:34:45.208 "name": "BaseBdev2", 00:34:45.208 "uuid": "81b68247-85a3-4690-93bd-4e897429544e", 00:34:45.208 "is_configured": true, 00:34:45.208 "data_offset": 256, 00:34:45.208 "data_size": 7936 00:34:45.208 } 00:34:45.208 ] 00:34:45.208 }' 00:34:45.208 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:45.208 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.466 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:34:45.466 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:45.467 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:45.467 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:45.467 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.467 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.467 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.467 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:45.467 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:45.467 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:34:45.467 17:31:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.467 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.467 [2024-11-26 17:31:15.486784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:45.467 [2024-11-26 17:31:15.486919] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:45.726 [2024-11-26 17:31:15.588942] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:45.726 [2024-11-26 17:31:15.589230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:45.726 [2024-11-26 17:31:15.589373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88634 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88634 ']' 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88634 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88634 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:45.726 killing process with pid 88634 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88634' 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88634 00:34:45.726 [2024-11-26 17:31:15.690107] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:45.726 17:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88634 00:34:45.726 [2024-11-26 17:31:15.709185] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:47.100 
17:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:34:47.100 00:34:47.100 real 0m5.124s 00:34:47.100 user 0m7.178s 00:34:47.100 sys 0m1.064s 00:34:47.100 17:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:47.100 ************************************ 00:34:47.100 END TEST raid_state_function_test_sb_md_interleaved 00:34:47.100 ************************************ 00:34:47.100 17:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:47.100 17:31:16 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:34:47.100 17:31:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:47.100 17:31:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:47.100 17:31:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:47.100 ************************************ 00:34:47.100 START TEST raid_superblock_test_md_interleaved 00:34:47.100 ************************************ 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88885 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88885 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88885 ']' 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:47.100 17:31:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:47.100 [2024-11-26 17:31:17.085050] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:34:47.100 [2024-11-26 17:31:17.085198] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88885 ] 00:34:47.358 [2024-11-26 17:31:17.270149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.358 [2024-11-26 17:31:17.387446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.615 [2024-11-26 17:31:17.614992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:47.615 [2024-11-26 17:31:17.615073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:47.873 malloc1 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.873 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:47.873 [2024-11-26 17:31:17.979555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:47.873 [2024-11-26 17:31:17.979776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:47.873 [2024-11-26 17:31:17.979817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:47.873 [2024-11-26 17:31:17.979832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:47.873 
[2024-11-26 17:31:17.982126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:47.873 [2024-11-26 17:31:17.982173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:48.131 pt1 00:34:48.131 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.131 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:48.131 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:48.131 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:34:48.131 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:34:48.131 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:48.131 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:48.131 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:48.131 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:48.131 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:34:48.131 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.131 17:31:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.131 malloc2 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.131 [2024-11-26 17:31:18.041765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:48.131 [2024-11-26 17:31:18.041849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:48.131 [2024-11-26 17:31:18.041883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:48.131 [2024-11-26 17:31:18.041896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:48.131 [2024-11-26 17:31:18.044249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:48.131 [2024-11-26 17:31:18.044294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:48.131 pt2 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.131 [2024-11-26 17:31:18.053767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:48.131 [2024-11-26 17:31:18.056116] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:48.131 [2024-11-26 17:31:18.056340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:48.131 [2024-11-26 17:31:18.056357] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:48.131 [2024-11-26 17:31:18.056444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:34:48.131 [2024-11-26 17:31:18.056552] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:48.131 [2024-11-26 17:31:18.056569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:34:48.131 [2024-11-26 17:31:18.056652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:48.131 
17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:48.131 "name": "raid_bdev1", 00:34:48.131 "uuid": "b08a71b8-4aeb-4f63-b224-d845d53665cd", 00:34:48.131 "strip_size_kb": 0, 00:34:48.131 "state": "online", 00:34:48.131 "raid_level": "raid1", 00:34:48.131 "superblock": true, 00:34:48.131 "num_base_bdevs": 2, 00:34:48.131 "num_base_bdevs_discovered": 2, 00:34:48.131 "num_base_bdevs_operational": 2, 00:34:48.131 "base_bdevs_list": [ 00:34:48.131 { 00:34:48.131 "name": "pt1", 00:34:48.131 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:48.131 "is_configured": true, 00:34:48.131 "data_offset": 256, 00:34:48.131 "data_size": 7936 00:34:48.131 }, 00:34:48.131 { 00:34:48.131 "name": "pt2", 00:34:48.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:48.131 "is_configured": true, 00:34:48.131 "data_offset": 256, 00:34:48.131 "data_size": 7936 00:34:48.131 } 00:34:48.131 ] 00:34:48.131 }' 00:34:48.131 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:48.131 17:31:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.696 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:34:48.696 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:48.696 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:48.696 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:48.696 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:34:48.696 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:48.696 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:48.696 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:48.696 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.696 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.696 [2024-11-26 17:31:18.513675] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:48.697 "name": "raid_bdev1", 00:34:48.697 "aliases": [ 00:34:48.697 "b08a71b8-4aeb-4f63-b224-d845d53665cd" 00:34:48.697 ], 00:34:48.697 "product_name": "Raid Volume", 00:34:48.697 "block_size": 4128, 00:34:48.697 "num_blocks": 7936, 00:34:48.697 "uuid": "b08a71b8-4aeb-4f63-b224-d845d53665cd", 00:34:48.697 "md_size": 32, 
00:34:48.697 "md_interleave": true, 00:34:48.697 "dif_type": 0, 00:34:48.697 "assigned_rate_limits": { 00:34:48.697 "rw_ios_per_sec": 0, 00:34:48.697 "rw_mbytes_per_sec": 0, 00:34:48.697 "r_mbytes_per_sec": 0, 00:34:48.697 "w_mbytes_per_sec": 0 00:34:48.697 }, 00:34:48.697 "claimed": false, 00:34:48.697 "zoned": false, 00:34:48.697 "supported_io_types": { 00:34:48.697 "read": true, 00:34:48.697 "write": true, 00:34:48.697 "unmap": false, 00:34:48.697 "flush": false, 00:34:48.697 "reset": true, 00:34:48.697 "nvme_admin": false, 00:34:48.697 "nvme_io": false, 00:34:48.697 "nvme_io_md": false, 00:34:48.697 "write_zeroes": true, 00:34:48.697 "zcopy": false, 00:34:48.697 "get_zone_info": false, 00:34:48.697 "zone_management": false, 00:34:48.697 "zone_append": false, 00:34:48.697 "compare": false, 00:34:48.697 "compare_and_write": false, 00:34:48.697 "abort": false, 00:34:48.697 "seek_hole": false, 00:34:48.697 "seek_data": false, 00:34:48.697 "copy": false, 00:34:48.697 "nvme_iov_md": false 00:34:48.697 }, 00:34:48.697 "memory_domains": [ 00:34:48.697 { 00:34:48.697 "dma_device_id": "system", 00:34:48.697 "dma_device_type": 1 00:34:48.697 }, 00:34:48.697 { 00:34:48.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:48.697 "dma_device_type": 2 00:34:48.697 }, 00:34:48.697 { 00:34:48.697 "dma_device_id": "system", 00:34:48.697 "dma_device_type": 1 00:34:48.697 }, 00:34:48.697 { 00:34:48.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:48.697 "dma_device_type": 2 00:34:48.697 } 00:34:48.697 ], 00:34:48.697 "driver_specific": { 00:34:48.697 "raid": { 00:34:48.697 "uuid": "b08a71b8-4aeb-4f63-b224-d845d53665cd", 00:34:48.697 "strip_size_kb": 0, 00:34:48.697 "state": "online", 00:34:48.697 "raid_level": "raid1", 00:34:48.697 "superblock": true, 00:34:48.697 "num_base_bdevs": 2, 00:34:48.697 "num_base_bdevs_discovered": 2, 00:34:48.697 "num_base_bdevs_operational": 2, 00:34:48.697 "base_bdevs_list": [ 00:34:48.697 { 00:34:48.697 "name": "pt1", 00:34:48.697 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:34:48.697 "is_configured": true, 00:34:48.697 "data_offset": 256, 00:34:48.697 "data_size": 7936 00:34:48.697 }, 00:34:48.697 { 00:34:48.697 "name": "pt2", 00:34:48.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:48.697 "is_configured": true, 00:34:48.697 "data_offset": 256, 00:34:48.697 "data_size": 7936 00:34:48.697 } 00:34:48.697 ] 00:34:48.697 } 00:34:48.697 } 00:34:48.697 }' 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:48.697 pt2' 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:34:48.697 17:31:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:34:48.697 [2024-11-26 17:31:18.745214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b08a71b8-4aeb-4f63-b224-d845d53665cd 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z b08a71b8-4aeb-4f63-b224-d845d53665cd ']' 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.697 [2024-11-26 17:31:18.788881] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:48.697 [2024-11-26 17:31:18.788910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:48.697 [2024-11-26 17:31:18.789011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:48.697 [2024-11-26 17:31:18.789080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:48.697 [2024-11-26 17:31:18.789097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.697 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.955 17:31:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.955 17:31:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.955 [2024-11-26 17:31:18.916727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:48.955 [2024-11-26 17:31:18.919034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:48.955 [2024-11-26 17:31:18.919128] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:34:48.955 [2024-11-26 17:31:18.919199] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:34:48.955 [2024-11-26 17:31:18.919221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:48.955 [2024-11-26 17:31:18.919237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:34:48.955 request: 00:34:48.955 { 00:34:48.955 "name": "raid_bdev1", 00:34:48.955 "raid_level": "raid1", 00:34:48.955 "base_bdevs": [ 00:34:48.955 "malloc1", 00:34:48.955 "malloc2" 00:34:48.955 ], 00:34:48.955 "superblock": false, 00:34:48.955 "method": "bdev_raid_create", 00:34:48.955 "req_id": 1 00:34:48.955 } 00:34:48.955 Got JSON-RPC error response 00:34:48.955 response: 00:34:48.955 { 00:34:48.955 "code": -17, 00:34:48.955 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:48.955 } 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:48.955 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.956 17:31:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.956 [2024-11-26 17:31:18.972667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:48.956 [2024-11-26 17:31:18.972737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:48.956 [2024-11-26 17:31:18.972769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:34:48.956 [2024-11-26 17:31:18.972789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:48.956 [2024-11-26 17:31:18.975238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:48.956 [2024-11-26 17:31:18.975285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:48.956 [2024-11-26 17:31:18.975356] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:48.956 [2024-11-26 17:31:18.975424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:48.956 pt1 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.956 17:31:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.956 17:31:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.956 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.956 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:48.956 
"name": "raid_bdev1", 00:34:48.956 "uuid": "b08a71b8-4aeb-4f63-b224-d845d53665cd", 00:34:48.956 "strip_size_kb": 0, 00:34:48.956 "state": "configuring", 00:34:48.956 "raid_level": "raid1", 00:34:48.956 "superblock": true, 00:34:48.956 "num_base_bdevs": 2, 00:34:48.956 "num_base_bdevs_discovered": 1, 00:34:48.956 "num_base_bdevs_operational": 2, 00:34:48.956 "base_bdevs_list": [ 00:34:48.956 { 00:34:48.956 "name": "pt1", 00:34:48.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:48.956 "is_configured": true, 00:34:48.956 "data_offset": 256, 00:34:48.956 "data_size": 7936 00:34:48.956 }, 00:34:48.956 { 00:34:48.956 "name": null, 00:34:48.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:48.956 "is_configured": false, 00:34:48.956 "data_offset": 256, 00:34:48.956 "data_size": 7936 00:34:48.956 } 00:34:48.956 ] 00:34:48.956 }' 00:34:48.956 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:48.956 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.518 [2024-11-26 17:31:19.388388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:49.518 [2024-11-26 17:31:19.388489] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:49.518 [2024-11-26 17:31:19.388536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:49.518 [2024-11-26 17:31:19.388555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:49.518 [2024-11-26 17:31:19.388774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:49.518 [2024-11-26 17:31:19.388798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:49.518 [2024-11-26 17:31:19.388868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:49.518 [2024-11-26 17:31:19.388897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:49.518 [2024-11-26 17:31:19.388998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:49.518 [2024-11-26 17:31:19.389013] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:49.518 [2024-11-26 17:31:19.389120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:49.518 [2024-11-26 17:31:19.389193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:49.518 [2024-11-26 17:31:19.389203] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:34:49.518 [2024-11-26 17:31:19.389280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:49.518 pt2 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:49.518 17:31:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:49.518 "name": 
"raid_bdev1", 00:34:49.518 "uuid": "b08a71b8-4aeb-4f63-b224-d845d53665cd", 00:34:49.518 "strip_size_kb": 0, 00:34:49.518 "state": "online", 00:34:49.518 "raid_level": "raid1", 00:34:49.518 "superblock": true, 00:34:49.518 "num_base_bdevs": 2, 00:34:49.518 "num_base_bdevs_discovered": 2, 00:34:49.518 "num_base_bdevs_operational": 2, 00:34:49.518 "base_bdevs_list": [ 00:34:49.518 { 00:34:49.518 "name": "pt1", 00:34:49.518 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:49.518 "is_configured": true, 00:34:49.518 "data_offset": 256, 00:34:49.518 "data_size": 7936 00:34:49.518 }, 00:34:49.518 { 00:34:49.518 "name": "pt2", 00:34:49.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:49.518 "is_configured": true, 00:34:49.518 "data_offset": 256, 00:34:49.518 "data_size": 7936 00:34:49.518 } 00:34:49.518 ] 00:34:49.518 }' 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:49.518 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.775 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:34:49.775 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:49.775 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:49.775 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:49.775 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:34:49.775 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:49.775 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:49.775 17:31:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:49.775 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.775 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.775 [2024-11-26 17:31:19.828067] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:49.775 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.775 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:49.775 "name": "raid_bdev1", 00:34:49.775 "aliases": [ 00:34:49.775 "b08a71b8-4aeb-4f63-b224-d845d53665cd" 00:34:49.775 ], 00:34:49.775 "product_name": "Raid Volume", 00:34:49.775 "block_size": 4128, 00:34:49.775 "num_blocks": 7936, 00:34:49.775 "uuid": "b08a71b8-4aeb-4f63-b224-d845d53665cd", 00:34:49.775 "md_size": 32, 00:34:49.775 "md_interleave": true, 00:34:49.775 "dif_type": 0, 00:34:49.775 "assigned_rate_limits": { 00:34:49.775 "rw_ios_per_sec": 0, 00:34:49.775 "rw_mbytes_per_sec": 0, 00:34:49.775 "r_mbytes_per_sec": 0, 00:34:49.775 "w_mbytes_per_sec": 0 00:34:49.775 }, 00:34:49.775 "claimed": false, 00:34:49.775 "zoned": false, 00:34:49.775 "supported_io_types": { 00:34:49.775 "read": true, 00:34:49.775 "write": true, 00:34:49.775 "unmap": false, 00:34:49.775 "flush": false, 00:34:49.775 "reset": true, 00:34:49.775 "nvme_admin": false, 00:34:49.775 "nvme_io": false, 00:34:49.775 "nvme_io_md": false, 00:34:49.775 "write_zeroes": true, 00:34:49.775 "zcopy": false, 00:34:49.775 "get_zone_info": false, 00:34:49.775 "zone_management": false, 00:34:49.775 "zone_append": false, 00:34:49.775 "compare": false, 00:34:49.775 "compare_and_write": false, 00:34:49.775 "abort": false, 00:34:49.775 "seek_hole": false, 00:34:49.775 "seek_data": false, 00:34:49.775 "copy": false, 00:34:49.775 "nvme_iov_md": 
false 00:34:49.775 }, 00:34:49.775 "memory_domains": [ 00:34:49.775 { 00:34:49.775 "dma_device_id": "system", 00:34:49.775 "dma_device_type": 1 00:34:49.775 }, 00:34:49.775 { 00:34:49.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:49.775 "dma_device_type": 2 00:34:49.775 }, 00:34:49.775 { 00:34:49.775 "dma_device_id": "system", 00:34:49.775 "dma_device_type": 1 00:34:49.775 }, 00:34:49.775 { 00:34:49.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:49.775 "dma_device_type": 2 00:34:49.775 } 00:34:49.775 ], 00:34:49.775 "driver_specific": { 00:34:49.775 "raid": { 00:34:49.775 "uuid": "b08a71b8-4aeb-4f63-b224-d845d53665cd", 00:34:49.775 "strip_size_kb": 0, 00:34:49.775 "state": "online", 00:34:49.775 "raid_level": "raid1", 00:34:49.775 "superblock": true, 00:34:49.775 "num_base_bdevs": 2, 00:34:49.775 "num_base_bdevs_discovered": 2, 00:34:49.775 "num_base_bdevs_operational": 2, 00:34:49.775 "base_bdevs_list": [ 00:34:49.775 { 00:34:49.775 "name": "pt1", 00:34:49.775 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:49.775 "is_configured": true, 00:34:49.775 "data_offset": 256, 00:34:49.775 "data_size": 7936 00:34:49.775 }, 00:34:49.775 { 00:34:49.775 "name": "pt2", 00:34:49.775 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:49.775 "is_configured": true, 00:34:49.775 "data_offset": 256, 00:34:49.775 "data_size": 7936 00:34:49.775 } 00:34:49.775 ] 00:34:49.775 } 00:34:49.775 } 00:34:49.775 }' 00:34:49.775 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:50.033 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:50.033 pt2' 00:34:50.033 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:50.033 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:34:50.033 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:50.033 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:50.033 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.033 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.033 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:50.033 17:31:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.033 [2024-11-26 17:31:20.063750] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' b08a71b8-4aeb-4f63-b224-d845d53665cd '!=' b08a71b8-4aeb-4f63-b224-d845d53665cd ']' 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:34:50.033 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.034 [2024-11-26 17:31:20.103452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.034 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.291 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:34:50.291 "name": "raid_bdev1", 00:34:50.291 "uuid": "b08a71b8-4aeb-4f63-b224-d845d53665cd", 00:34:50.291 "strip_size_kb": 0, 00:34:50.291 "state": "online", 00:34:50.291 "raid_level": "raid1", 00:34:50.291 "superblock": true, 00:34:50.291 "num_base_bdevs": 2, 00:34:50.291 "num_base_bdevs_discovered": 1, 00:34:50.291 "num_base_bdevs_operational": 1, 00:34:50.291 "base_bdevs_list": [ 00:34:50.291 { 00:34:50.291 "name": null, 00:34:50.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:50.291 "is_configured": false, 00:34:50.291 "data_offset": 0, 00:34:50.291 "data_size": 7936 00:34:50.291 }, 00:34:50.291 { 00:34:50.291 "name": "pt2", 00:34:50.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:50.291 "is_configured": true, 00:34:50.291 "data_offset": 256, 00:34:50.291 "data_size": 7936 00:34:50.291 } 00:34:50.291 ] 00:34:50.291 }' 00:34:50.291 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:50.291 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.548 [2024-11-26 17:31:20.526841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:50.548 [2024-11-26 17:31:20.526877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:50.548 [2024-11-26 17:31:20.526976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:50.548 [2024-11-26 17:31:20.527040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:34:50.548 [2024-11-26 17:31:20.527057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.548 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.548 [2024-11-26 17:31:20.598735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:50.548 [2024-11-26 17:31:20.598806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:50.548 [2024-11-26 17:31:20.598827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:34:50.548 [2024-11-26 17:31:20.598853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:50.548 [2024-11-26 17:31:20.601280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:50.548 [2024-11-26 17:31:20.601463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:50.548 [2024-11-26 17:31:20.601558] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:50.548 [2024-11-26 17:31:20.601627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:50.548 [2024-11-26 17:31:20.601718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:50.548 [2024-11-26 17:31:20.601736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:34:50.548 [2024-11-26 17:31:20.601848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:50.549 [2024-11-26 17:31:20.601921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:34:50.549 [2024-11-26 17:31:20.601932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:34:50.549 [2024-11-26 17:31:20.602003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:50.549 pt2 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:50.549 17:31:20 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:50.549 "name": "raid_bdev1", 00:34:50.549 "uuid": "b08a71b8-4aeb-4f63-b224-d845d53665cd", 00:34:50.549 "strip_size_kb": 0, 00:34:50.549 "state": "online", 00:34:50.549 "raid_level": "raid1", 00:34:50.549 "superblock": true, 00:34:50.549 "num_base_bdevs": 2, 00:34:50.549 "num_base_bdevs_discovered": 1, 00:34:50.549 "num_base_bdevs_operational": 1, 00:34:50.549 "base_bdevs_list": [ 00:34:50.549 { 00:34:50.549 "name": null, 00:34:50.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:50.549 "is_configured": false, 00:34:50.549 "data_offset": 256, 00:34:50.549 "data_size": 7936 00:34:50.549 }, 00:34:50.549 { 00:34:50.549 "name": "pt2", 00:34:50.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:50.549 "is_configured": true, 00:34:50.549 "data_offset": 256, 00:34:50.549 "data_size": 7936 00:34:50.549 } 00:34:50.549 ] 00:34:50.549 }' 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:50.549 17:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:51.160 17:31:21 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.160 [2024-11-26 17:31:21.014669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:51.160 [2024-11-26 17:31:21.014874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:51.160 [2024-11-26 17:31:21.014997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:51.160 [2024-11-26 17:31:21.015070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:51.160 [2024-11-26 17:31:21.015083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.160 [2024-11-26 17:31:21.074753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:51.160 [2024-11-26 17:31:21.074959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:51.160 [2024-11-26 17:31:21.075032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:34:51.160 [2024-11-26 17:31:21.075123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:51.160 [2024-11-26 17:31:21.077626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:51.160 [2024-11-26 17:31:21.077779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:51.160 [2024-11-26 17:31:21.078041] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:51.160 [2024-11-26 17:31:21.078181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:51.160 [2024-11-26 17:31:21.078353] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:34:51.160 [2024-11-26 17:31:21.078471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:51.160 [2024-11-26 17:31:21.078541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:34:51.160 [2024-11-26 17:31:21.078800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:51.160 [2024-11-26 17:31:21.078916] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:34:51.160 [2024-11-26 17:31:21.078927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:51.160 [2024-11-26 17:31:21.079012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:34:51.160 [2024-11-26 17:31:21.079078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:34:51.160 [2024-11-26 17:31:21.079090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:34:51.160 [2024-11-26 17:31:21.079224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:51.160 pt1 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:51.160 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:51.161 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:51.161 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:51.161 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:51.161 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:51.161 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:51.161 17:31:21 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:51.161 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:51.161 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:51.161 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.161 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.161 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.161 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.161 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:51.161 "name": "raid_bdev1", 00:34:51.161 "uuid": "b08a71b8-4aeb-4f63-b224-d845d53665cd", 00:34:51.161 "strip_size_kb": 0, 00:34:51.161 "state": "online", 00:34:51.161 "raid_level": "raid1", 00:34:51.161 "superblock": true, 00:34:51.161 "num_base_bdevs": 2, 00:34:51.161 "num_base_bdevs_discovered": 1, 00:34:51.161 "num_base_bdevs_operational": 1, 00:34:51.161 "base_bdevs_list": [ 00:34:51.161 { 00:34:51.161 "name": null, 00:34:51.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:51.161 "is_configured": false, 00:34:51.161 "data_offset": 256, 00:34:51.161 "data_size": 7936 00:34:51.161 }, 00:34:51.161 { 00:34:51.161 "name": "pt2", 00:34:51.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:51.161 "is_configured": true, 00:34:51.161 "data_offset": 256, 00:34:51.161 "data_size": 7936 00:34:51.161 } 00:34:51.161 ] 00:34:51.161 }' 00:34:51.161 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:51.161 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:34:51.419 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:34:51.419 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.419 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.419 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:34:51.419 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.419 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.679 [2024-11-26 17:31:21.542874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' b08a71b8-4aeb-4f63-b224-d845d53665cd '!=' b08a71b8-4aeb-4f63-b224-d845d53665cd ']' 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88885 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88885 ']' 00:34:51.679 17:31:21 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88885 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88885 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:51.679 killing process with pid 88885 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88885' 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88885 00:34:51.679 [2024-11-26 17:31:21.619162] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:51.679 [2024-11-26 17:31:21.619274] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:51.679 17:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88885 00:34:51.679 [2024-11-26 17:31:21.619337] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:51.679 [2024-11-26 17:31:21.619358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:34:51.939 [2024-11-26 17:31:21.844409] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:53.315 17:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:34:53.315 00:34:53.315 real 0m6.039s 00:34:53.315 user 0m9.003s 00:34:53.315 sys 0m1.263s 00:34:53.315 
17:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:53.315 ************************************ 00:34:53.315 END TEST raid_superblock_test_md_interleaved 00:34:53.315 ************************************ 00:34:53.315 17:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:53.315 17:31:23 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:34:53.315 17:31:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:34:53.315 17:31:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:53.315 17:31:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:53.315 ************************************ 00:34:53.315 START TEST raid_rebuild_test_sb_md_interleaved 00:34:53.315 ************************************ 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89209 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89209 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89209 ']' 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:53.315 17:31:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:53.315 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:53.315 Zero copy mechanism will not be used. 00:34:53.315 [2024-11-26 17:31:23.215269] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:34:53.315 [2024-11-26 17:31:23.215412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89209 ] 00:34:53.315 [2024-11-26 17:31:23.388331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.574 [2024-11-26 17:31:23.504991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.833 [2024-11-26 17:31:23.732903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:53.833 [2024-11-26 17:31:23.733252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.092 BaseBdev1_malloc 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.092 17:31:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.092 [2024-11-26 17:31:24.122815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:54.092 [2024-11-26 17:31:24.122896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:54.092 [2024-11-26 17:31:24.122924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:54.092 [2024-11-26 17:31:24.122941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:54.092 [2024-11-26 17:31:24.125265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:54.092 [2024-11-26 17:31:24.125318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:54.092 BaseBdev1 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.092 BaseBdev2_malloc 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:34:54.092 [2024-11-26 17:31:24.188336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:54.092 [2024-11-26 17:31:24.188415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:54.092 [2024-11-26 17:31:24.188441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:54.092 [2024-11-26 17:31:24.188460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:54.092 [2024-11-26 17:31:24.190732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:54.092 [2024-11-26 17:31:24.190916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:54.092 BaseBdev2 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.092 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.352 spare_malloc 00:34:54.352 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.352 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:54.352 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.352 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.352 spare_delay 00:34:54.352 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.352 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:54.352 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.352 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.352 [2024-11-26 17:31:24.276486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:54.352 [2024-11-26 17:31:24.276728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:54.352 [2024-11-26 17:31:24.276766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:54.352 [2024-11-26 17:31:24.276783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:54.352 [2024-11-26 17:31:24.279191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:54.352 [2024-11-26 17:31:24.279244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:54.352 spare 00:34:54.352 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.352 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:34:54.352 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.352 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.352 [2024-11-26 17:31:24.288540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:54.352 [2024-11-26 17:31:24.290811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:54.352 [2024-11-26 
17:31:24.291039] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:54.352 [2024-11-26 17:31:24.291059] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:54.352 [2024-11-26 17:31:24.291143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:54.352 [2024-11-26 17:31:24.291228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:54.352 [2024-11-26 17:31:24.291238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:34:54.352 [2024-11-26 17:31:24.291320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:54.353 "name": "raid_bdev1", 00:34:54.353 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:34:54.353 "strip_size_kb": 0, 00:34:54.353 "state": "online", 00:34:54.353 "raid_level": "raid1", 00:34:54.353 "superblock": true, 00:34:54.353 "num_base_bdevs": 2, 00:34:54.353 "num_base_bdevs_discovered": 2, 00:34:54.353 "num_base_bdevs_operational": 2, 00:34:54.353 "base_bdevs_list": [ 00:34:54.353 { 00:34:54.353 "name": "BaseBdev1", 00:34:54.353 "uuid": "a2d1ae3a-32fe-578b-95be-850ee83f66e0", 00:34:54.353 "is_configured": true, 00:34:54.353 "data_offset": 256, 00:34:54.353 "data_size": 7936 00:34:54.353 }, 00:34:54.353 { 00:34:54.353 "name": "BaseBdev2", 00:34:54.353 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:34:54.353 "is_configured": true, 00:34:54.353 "data_offset": 256, 00:34:54.353 "data_size": 7936 00:34:54.353 } 00:34:54.353 ] 00:34:54.353 }' 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:54.353 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.921 17:31:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.921 [2024-11-26 17:31:24.748204] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:34:54.921 17:31:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.921 [2024-11-26 17:31:24.835739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:54.921 17:31:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:54.921 "name": "raid_bdev1", 00:34:54.921 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:34:54.921 "strip_size_kb": 0, 00:34:54.921 "state": "online", 00:34:54.921 "raid_level": "raid1", 00:34:54.921 "superblock": true, 00:34:54.921 "num_base_bdevs": 2, 00:34:54.921 "num_base_bdevs_discovered": 1, 00:34:54.921 "num_base_bdevs_operational": 1, 00:34:54.921 "base_bdevs_list": [ 00:34:54.921 { 00:34:54.921 "name": null, 00:34:54.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:54.921 "is_configured": false, 00:34:54.921 "data_offset": 0, 00:34:54.921 "data_size": 7936 00:34:54.921 }, 00:34:54.921 { 00:34:54.921 "name": "BaseBdev2", 00:34:54.921 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:34:54.921 "is_configured": true, 00:34:54.921 "data_offset": 256, 00:34:54.921 "data_size": 7936 00:34:54.921 } 00:34:54.921 ] 00:34:54.921 }' 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:54.921 17:31:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:55.181 17:31:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:55.181 17:31:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.181 17:31:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:55.181 [2024-11-26 17:31:25.263355] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:55.181 [2024-11-26 17:31:25.284676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:55.182 17:31:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.182 17:31:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:34:55.182 [2024-11-26 17:31:25.287170] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:56.561 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:56.561 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:56.561 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:56.561 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:56.561 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:56.561 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:56.561 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:56.561 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.561 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:56.561 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.561 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:56.561 "name": "raid_bdev1", 00:34:56.561 
"uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:34:56.561 "strip_size_kb": 0, 00:34:56.561 "state": "online", 00:34:56.561 "raid_level": "raid1", 00:34:56.561 "superblock": true, 00:34:56.561 "num_base_bdevs": 2, 00:34:56.561 "num_base_bdevs_discovered": 2, 00:34:56.561 "num_base_bdevs_operational": 2, 00:34:56.561 "process": { 00:34:56.561 "type": "rebuild", 00:34:56.561 "target": "spare", 00:34:56.561 "progress": { 00:34:56.561 "blocks": 2560, 00:34:56.561 "percent": 32 00:34:56.561 } 00:34:56.561 }, 00:34:56.561 "base_bdevs_list": [ 00:34:56.561 { 00:34:56.561 "name": "spare", 00:34:56.561 "uuid": "28dcbf5a-f2bb-5c2d-8319-1308302f61db", 00:34:56.561 "is_configured": true, 00:34:56.561 "data_offset": 256, 00:34:56.561 "data_size": 7936 00:34:56.561 }, 00:34:56.561 { 00:34:56.561 "name": "BaseBdev2", 00:34:56.561 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:34:56.561 "is_configured": true, 00:34:56.561 "data_offset": 256, 00:34:56.561 "data_size": 7936 00:34:56.561 } 00:34:56.561 ] 00:34:56.561 }' 00:34:56.561 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:56.561 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:56.561 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:56.561 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:56.562 [2024-11-26 17:31:26.434966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:34:56.562 [2024-11-26 17:31:26.495653] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:56.562 [2024-11-26 17:31:26.495735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:56.562 [2024-11-26 17:31:26.495756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:56.562 [2024-11-26 17:31:26.495776] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:56.562 "name": "raid_bdev1", 00:34:56.562 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:34:56.562 "strip_size_kb": 0, 00:34:56.562 "state": "online", 00:34:56.562 "raid_level": "raid1", 00:34:56.562 "superblock": true, 00:34:56.562 "num_base_bdevs": 2, 00:34:56.562 "num_base_bdevs_discovered": 1, 00:34:56.562 "num_base_bdevs_operational": 1, 00:34:56.562 "base_bdevs_list": [ 00:34:56.562 { 00:34:56.562 "name": null, 00:34:56.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.562 "is_configured": false, 00:34:56.562 "data_offset": 0, 00:34:56.562 "data_size": 7936 00:34:56.562 }, 00:34:56.562 { 00:34:56.562 "name": "BaseBdev2", 00:34:56.562 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:34:56.562 "is_configured": true, 00:34:56.562 "data_offset": 256, 00:34:56.562 "data_size": 7936 00:34:56.562 } 00:34:56.562 ] 00:34:56.562 }' 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:56.562 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:57.134 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:57.134 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:34:57.134 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:57.134 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:57.134 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:57.134 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:57.134 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.134 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:57.134 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:57.134 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.134 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:57.134 "name": "raid_bdev1", 00:34:57.134 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:34:57.134 "strip_size_kb": 0, 00:34:57.134 "state": "online", 00:34:57.134 "raid_level": "raid1", 00:34:57.134 "superblock": true, 00:34:57.134 "num_base_bdevs": 2, 00:34:57.134 "num_base_bdevs_discovered": 1, 00:34:57.134 "num_base_bdevs_operational": 1, 00:34:57.134 "base_bdevs_list": [ 00:34:57.134 { 00:34:57.134 "name": null, 00:34:57.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.134 "is_configured": false, 00:34:57.134 "data_offset": 0, 00:34:57.134 "data_size": 7936 00:34:57.134 }, 00:34:57.134 { 00:34:57.134 "name": "BaseBdev2", 00:34:57.134 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:34:57.134 "is_configured": true, 00:34:57.134 "data_offset": 256, 00:34:57.134 "data_size": 7936 00:34:57.134 } 00:34:57.134 ] 00:34:57.134 }' 
00:34:57.134 17:31:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:57.134 17:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:57.134 17:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:57.134 17:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:57.134 17:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:57.134 17:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.134 17:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:57.134 [2024-11-26 17:31:27.076859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:57.134 [2024-11-26 17:31:27.095341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:34:57.134 17:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.134 17:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:34:57.134 [2024-11-26 17:31:27.097769] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:58.070 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:58.070 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:58.070 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:58.070 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:34:58.070 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:58.070 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.070 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:58.070 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.070 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:58.070 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.070 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:58.070 "name": "raid_bdev1", 00:34:58.070 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:34:58.070 "strip_size_kb": 0, 00:34:58.070 "state": "online", 00:34:58.070 "raid_level": "raid1", 00:34:58.070 "superblock": true, 00:34:58.070 "num_base_bdevs": 2, 00:34:58.070 "num_base_bdevs_discovered": 2, 00:34:58.070 "num_base_bdevs_operational": 2, 00:34:58.070 "process": { 00:34:58.070 "type": "rebuild", 00:34:58.070 "target": "spare", 00:34:58.070 "progress": { 00:34:58.070 "blocks": 2560, 00:34:58.070 "percent": 32 00:34:58.070 } 00:34:58.070 }, 00:34:58.070 "base_bdevs_list": [ 00:34:58.070 { 00:34:58.070 "name": "spare", 00:34:58.070 "uuid": "28dcbf5a-f2bb-5c2d-8319-1308302f61db", 00:34:58.070 "is_configured": true, 00:34:58.070 "data_offset": 256, 00:34:58.070 "data_size": 7936 00:34:58.070 }, 00:34:58.070 { 00:34:58.070 "name": "BaseBdev2", 00:34:58.070 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:34:58.070 "is_configured": true, 00:34:58.070 "data_offset": 256, 00:34:58.070 "data_size": 7936 00:34:58.070 } 00:34:58.070 ] 00:34:58.070 }' 00:34:58.070 17:31:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:34:58.330 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=754 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:58.330 17:31:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:58.330 "name": "raid_bdev1", 00:34:58.330 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:34:58.330 "strip_size_kb": 0, 00:34:58.330 "state": "online", 00:34:58.330 "raid_level": "raid1", 00:34:58.330 "superblock": true, 00:34:58.330 "num_base_bdevs": 2, 00:34:58.330 "num_base_bdevs_discovered": 2, 00:34:58.330 "num_base_bdevs_operational": 2, 00:34:58.330 "process": { 00:34:58.330 "type": "rebuild", 00:34:58.330 "target": "spare", 00:34:58.330 "progress": { 00:34:58.330 "blocks": 2816, 00:34:58.330 "percent": 35 00:34:58.330 } 00:34:58.330 }, 00:34:58.330 "base_bdevs_list": [ 00:34:58.330 { 00:34:58.330 "name": "spare", 00:34:58.330 "uuid": "28dcbf5a-f2bb-5c2d-8319-1308302f61db", 00:34:58.330 "is_configured": true, 00:34:58.330 "data_offset": 256, 00:34:58.330 "data_size": 7936 00:34:58.330 }, 00:34:58.330 { 00:34:58.330 "name": "BaseBdev2", 00:34:58.330 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:34:58.330 "is_configured": true, 00:34:58.330 "data_offset": 256, 00:34:58.330 "data_size": 7936 00:34:58.330 } 00:34:58.330 ] 00:34:58.330 }' 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:58.330 17:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:59.270 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:59.270 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:59.270 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:59.270 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:59.270 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:59.270 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:59.270 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:59.270 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.270 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:59.270 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:59.533 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.533 17:31:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:59.533 "name": "raid_bdev1", 00:34:59.533 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:34:59.533 "strip_size_kb": 0, 00:34:59.533 "state": "online", 00:34:59.533 "raid_level": "raid1", 00:34:59.533 "superblock": true, 00:34:59.533 "num_base_bdevs": 2, 00:34:59.533 "num_base_bdevs_discovered": 2, 00:34:59.533 "num_base_bdevs_operational": 2, 00:34:59.533 "process": { 00:34:59.533 "type": "rebuild", 00:34:59.533 "target": "spare", 00:34:59.533 "progress": { 00:34:59.533 "blocks": 5632, 00:34:59.533 "percent": 70 00:34:59.533 } 00:34:59.533 }, 00:34:59.533 "base_bdevs_list": [ 00:34:59.533 { 00:34:59.533 "name": "spare", 00:34:59.533 "uuid": "28dcbf5a-f2bb-5c2d-8319-1308302f61db", 00:34:59.533 "is_configured": true, 00:34:59.533 "data_offset": 256, 00:34:59.533 "data_size": 7936 00:34:59.533 }, 00:34:59.533 { 00:34:59.533 "name": "BaseBdev2", 00:34:59.533 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:34:59.533 "is_configured": true, 00:34:59.533 "data_offset": 256, 00:34:59.533 "data_size": 7936 00:34:59.533 } 00:34:59.533 ] 00:34:59.533 }' 00:34:59.533 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:59.533 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:59.533 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:59.533 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:59.533 17:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:00.471 [2024-11-26 17:31:30.220576] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:00.471 [2024-11-26 17:31:30.220663] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:00.471 [2024-11-26 17:31:30.220833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:00.471 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:00.471 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:00.471 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:00.471 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:00.471 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:00.471 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:00.471 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:00.471 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.471 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:00.471 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:00.472 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.472 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:00.472 "name": "raid_bdev1", 00:35:00.472 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:35:00.472 "strip_size_kb": 0, 00:35:00.472 "state": "online", 00:35:00.472 "raid_level": "raid1", 00:35:00.472 "superblock": true, 00:35:00.472 "num_base_bdevs": 2, 00:35:00.472 
"num_base_bdevs_discovered": 2, 00:35:00.472 "num_base_bdevs_operational": 2, 00:35:00.472 "base_bdevs_list": [ 00:35:00.472 { 00:35:00.472 "name": "spare", 00:35:00.472 "uuid": "28dcbf5a-f2bb-5c2d-8319-1308302f61db", 00:35:00.472 "is_configured": true, 00:35:00.472 "data_offset": 256, 00:35:00.472 "data_size": 7936 00:35:00.472 }, 00:35:00.472 { 00:35:00.472 "name": "BaseBdev2", 00:35:00.472 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:35:00.472 "is_configured": true, 00:35:00.472 "data_offset": 256, 00:35:00.472 "data_size": 7936 00:35:00.472 } 00:35:00.472 ] 00:35:00.472 }' 00:35:00.472 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:00.731 
17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:00.731 "name": "raid_bdev1", 00:35:00.731 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:35:00.731 "strip_size_kb": 0, 00:35:00.731 "state": "online", 00:35:00.731 "raid_level": "raid1", 00:35:00.731 "superblock": true, 00:35:00.731 "num_base_bdevs": 2, 00:35:00.731 "num_base_bdevs_discovered": 2, 00:35:00.731 "num_base_bdevs_operational": 2, 00:35:00.731 "base_bdevs_list": [ 00:35:00.731 { 00:35:00.731 "name": "spare", 00:35:00.731 "uuid": "28dcbf5a-f2bb-5c2d-8319-1308302f61db", 00:35:00.731 "is_configured": true, 00:35:00.731 "data_offset": 256, 00:35:00.731 "data_size": 7936 00:35:00.731 }, 00:35:00.731 { 00:35:00.731 "name": "BaseBdev2", 00:35:00.731 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:35:00.731 "is_configured": true, 00:35:00.731 "data_offset": 256, 00:35:00.731 "data_size": 7936 00:35:00.731 } 00:35:00.731 ] 00:35:00.731 }' 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:00.731 17:31:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:00.731 "name": 
"raid_bdev1", 00:35:00.731 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:35:00.731 "strip_size_kb": 0, 00:35:00.731 "state": "online", 00:35:00.731 "raid_level": "raid1", 00:35:00.731 "superblock": true, 00:35:00.731 "num_base_bdevs": 2, 00:35:00.731 "num_base_bdevs_discovered": 2, 00:35:00.731 "num_base_bdevs_operational": 2, 00:35:00.731 "base_bdevs_list": [ 00:35:00.731 { 00:35:00.731 "name": "spare", 00:35:00.731 "uuid": "28dcbf5a-f2bb-5c2d-8319-1308302f61db", 00:35:00.731 "is_configured": true, 00:35:00.731 "data_offset": 256, 00:35:00.731 "data_size": 7936 00:35:00.731 }, 00:35:00.731 { 00:35:00.731 "name": "BaseBdev2", 00:35:00.731 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:35:00.731 "is_configured": true, 00:35:00.731 "data_offset": 256, 00:35:00.731 "data_size": 7936 00:35:00.731 } 00:35:00.731 ] 00:35:00.731 }' 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:00.731 17:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.298 [2024-11-26 17:31:31.170675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:01.298 [2024-11-26 17:31:31.170927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:01.298 [2024-11-26 17:31:31.171076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:01.298 [2024-11-26 17:31:31.171158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:01.298 [2024-11-26 
17:31:31.171173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.298 17:31:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.298 [2024-11-26 17:31:31.230641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:01.298 [2024-11-26 17:31:31.230723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:01.298 [2024-11-26 17:31:31.230753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:35:01.298 [2024-11-26 17:31:31.230767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:01.298 [2024-11-26 17:31:31.233261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:01.298 [2024-11-26 17:31:31.233313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:01.298 [2024-11-26 17:31:31.233390] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:01.298 [2024-11-26 17:31:31.233455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:01.298 [2024-11-26 17:31:31.233616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:01.298 spare 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.298 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.298 [2024-11-26 17:31:31.333569] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:35:01.298 [2024-11-26 17:31:31.333821] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:35:01.298 [2024-11-26 17:31:31.334014] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:35:01.298 [2024-11-26 17:31:31.334157] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:35:01.299 [2024-11-26 17:31:31.334172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:35:01.299 [2024-11-26 17:31:31.334300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.299 17:31:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:01.299 "name": "raid_bdev1", 00:35:01.299 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:35:01.299 "strip_size_kb": 0, 00:35:01.299 "state": "online", 00:35:01.299 "raid_level": "raid1", 00:35:01.299 "superblock": true, 00:35:01.299 "num_base_bdevs": 2, 00:35:01.299 "num_base_bdevs_discovered": 2, 00:35:01.299 "num_base_bdevs_operational": 2, 00:35:01.299 "base_bdevs_list": [ 00:35:01.299 { 00:35:01.299 "name": "spare", 00:35:01.299 "uuid": "28dcbf5a-f2bb-5c2d-8319-1308302f61db", 00:35:01.299 "is_configured": true, 00:35:01.299 "data_offset": 256, 00:35:01.299 "data_size": 7936 00:35:01.299 }, 00:35:01.299 { 00:35:01.299 "name": "BaseBdev2", 00:35:01.299 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:35:01.299 "is_configured": true, 00:35:01.299 "data_offset": 256, 00:35:01.299 "data_size": 7936 00:35:01.299 } 00:35:01.299 ] 00:35:01.299 }' 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:01.299 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:01.919 17:31:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:01.919 "name": "raid_bdev1", 00:35:01.919 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:35:01.919 "strip_size_kb": 0, 00:35:01.919 "state": "online", 00:35:01.919 "raid_level": "raid1", 00:35:01.919 "superblock": true, 00:35:01.919 "num_base_bdevs": 2, 00:35:01.919 "num_base_bdevs_discovered": 2, 00:35:01.919 "num_base_bdevs_operational": 2, 00:35:01.919 "base_bdevs_list": [ 00:35:01.919 { 00:35:01.919 "name": "spare", 00:35:01.919 "uuid": "28dcbf5a-f2bb-5c2d-8319-1308302f61db", 00:35:01.919 "is_configured": true, 00:35:01.919 "data_offset": 256, 00:35:01.919 "data_size": 7936 00:35:01.919 }, 00:35:01.919 { 00:35:01.919 "name": "BaseBdev2", 00:35:01.919 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:35:01.919 "is_configured": true, 00:35:01.919 "data_offset": 256, 00:35:01.919 "data_size": 7936 00:35:01.919 } 00:35:01.919 ] 00:35:01.919 }' 00:35:01.919 17:31:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.919 [2024-11-26 17:31:31.969936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:01.919 17:31:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:01.919 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:01.920 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:01.920 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.920 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:01.920 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.920 17:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.920 17:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.920 17:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:01.920 "name": "raid_bdev1", 00:35:01.920 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:35:01.920 "strip_size_kb": 0, 00:35:01.920 "state": "online", 00:35:01.920 
"raid_level": "raid1", 00:35:01.920 "superblock": true, 00:35:01.920 "num_base_bdevs": 2, 00:35:01.920 "num_base_bdevs_discovered": 1, 00:35:01.920 "num_base_bdevs_operational": 1, 00:35:01.920 "base_bdevs_list": [ 00:35:01.920 { 00:35:01.920 "name": null, 00:35:01.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:01.920 "is_configured": false, 00:35:01.920 "data_offset": 0, 00:35:01.920 "data_size": 7936 00:35:01.920 }, 00:35:01.920 { 00:35:01.920 "name": "BaseBdev2", 00:35:01.920 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:35:01.920 "is_configured": true, 00:35:01.920 "data_offset": 256, 00:35:01.920 "data_size": 7936 00:35:01.920 } 00:35:01.920 ] 00:35:01.920 }' 00:35:01.920 17:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:01.920 17:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:02.487 17:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:02.487 17:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.487 17:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:02.487 [2024-11-26 17:31:32.433763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:02.487 [2024-11-26 17:31:32.434268] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:35:02.487 [2024-11-26 17:31:32.434444] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:35:02.487 [2024-11-26 17:31:32.434602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:02.487 [2024-11-26 17:31:32.452213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:35:02.487 17:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.487 17:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:35:02.487 [2024-11-26 17:31:32.454734] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:03.422 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:03.422 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:03.422 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:03.422 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:03.422 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:03.422 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:03.422 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:03.422 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.422 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:03.422 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.422 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:35:03.422 "name": "raid_bdev1", 00:35:03.422 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:35:03.422 "strip_size_kb": 0, 00:35:03.422 "state": "online", 00:35:03.422 "raid_level": "raid1", 00:35:03.422 "superblock": true, 00:35:03.422 "num_base_bdevs": 2, 00:35:03.422 "num_base_bdevs_discovered": 2, 00:35:03.422 "num_base_bdevs_operational": 2, 00:35:03.422 "process": { 00:35:03.422 "type": "rebuild", 00:35:03.422 "target": "spare", 00:35:03.422 "progress": { 00:35:03.422 "blocks": 2560, 00:35:03.422 "percent": 32 00:35:03.422 } 00:35:03.422 }, 00:35:03.422 "base_bdevs_list": [ 00:35:03.422 { 00:35:03.422 "name": "spare", 00:35:03.422 "uuid": "28dcbf5a-f2bb-5c2d-8319-1308302f61db", 00:35:03.422 "is_configured": true, 00:35:03.422 "data_offset": 256, 00:35:03.422 "data_size": 7936 00:35:03.422 }, 00:35:03.422 { 00:35:03.422 "name": "BaseBdev2", 00:35:03.422 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:35:03.422 "is_configured": true, 00:35:03.422 "data_offset": 256, 00:35:03.422 "data_size": 7936 00:35:03.422 } 00:35:03.422 ] 00:35:03.422 }' 00:35:03.422 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:03.680 [2024-11-26 17:31:33.610502] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:03.680 [2024-11-26 17:31:33.663169] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:03.680 [2024-11-26 17:31:33.663296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:03.680 [2024-11-26 17:31:33.663320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:03.680 [2024-11-26 17:31:33.663334] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:03.680 17:31:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:03.680 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:03.681 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.681 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:03.681 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.681 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:03.681 "name": "raid_bdev1", 00:35:03.681 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:35:03.681 "strip_size_kb": 0, 00:35:03.681 "state": "online", 00:35:03.681 "raid_level": "raid1", 00:35:03.681 "superblock": true, 00:35:03.681 "num_base_bdevs": 2, 00:35:03.681 "num_base_bdevs_discovered": 1, 00:35:03.681 "num_base_bdevs_operational": 1, 00:35:03.681 "base_bdevs_list": [ 00:35:03.681 { 00:35:03.681 "name": null, 00:35:03.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:03.681 "is_configured": false, 00:35:03.681 "data_offset": 0, 00:35:03.681 "data_size": 7936 00:35:03.681 }, 00:35:03.681 { 00:35:03.681 "name": "BaseBdev2", 00:35:03.681 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:35:03.681 "is_configured": true, 00:35:03.681 "data_offset": 256, 00:35:03.681 "data_size": 7936 00:35:03.681 } 00:35:03.681 ] 00:35:03.681 }' 00:35:03.681 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:03.681 17:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:04.249 17:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:35:04.249 17:31:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.249 17:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:04.249 [2024-11-26 17:31:34.125718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:04.249 [2024-11-26 17:31:34.125852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:04.249 [2024-11-26 17:31:34.125890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:35:04.249 [2024-11-26 17:31:34.125909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:04.249 [2024-11-26 17:31:34.126166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:04.249 [2024-11-26 17:31:34.126188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:04.249 [2024-11-26 17:31:34.126267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:04.249 [2024-11-26 17:31:34.126288] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:35:04.249 [2024-11-26 17:31:34.126303] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:35:04.249 [2024-11-26 17:31:34.126333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:04.249 [2024-11-26 17:31:34.144755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:35:04.249 spare 00:35:04.249 17:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.249 17:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:35:04.249 [2024-11-26 17:31:34.147108] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:05.186 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:05.186 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:05.186 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:05.186 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:05.186 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:05.186 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:05.186 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.186 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:05.186 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:05.186 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.186 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:35:05.186 "name": "raid_bdev1", 00:35:05.186 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:35:05.186 "strip_size_kb": 0, 00:35:05.186 "state": "online", 00:35:05.186 "raid_level": "raid1", 00:35:05.186 "superblock": true, 00:35:05.186 "num_base_bdevs": 2, 00:35:05.186 "num_base_bdevs_discovered": 2, 00:35:05.186 "num_base_bdevs_operational": 2, 00:35:05.186 "process": { 00:35:05.186 "type": "rebuild", 00:35:05.186 "target": "spare", 00:35:05.186 "progress": { 00:35:05.186 "blocks": 2560, 00:35:05.186 "percent": 32 00:35:05.186 } 00:35:05.186 }, 00:35:05.186 "base_bdevs_list": [ 00:35:05.186 { 00:35:05.186 "name": "spare", 00:35:05.186 "uuid": "28dcbf5a-f2bb-5c2d-8319-1308302f61db", 00:35:05.186 "is_configured": true, 00:35:05.186 "data_offset": 256, 00:35:05.186 "data_size": 7936 00:35:05.186 }, 00:35:05.186 { 00:35:05.186 "name": "BaseBdev2", 00:35:05.186 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:35:05.186 "is_configured": true, 00:35:05.186 "data_offset": 256, 00:35:05.186 "data_size": 7936 00:35:05.186 } 00:35:05.186 ] 00:35:05.186 }' 00:35:05.186 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:05.186 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:05.186 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:05.186 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:05.445 [2024-11-26 
17:31:35.302831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:05.445 [2024-11-26 17:31:35.355268] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:05.445 [2024-11-26 17:31:35.355363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:05.445 [2024-11-26 17:31:35.355387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:05.445 [2024-11-26 17:31:35.355397] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:05.445 17:31:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:05.445 "name": "raid_bdev1", 00:35:05.445 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:35:05.445 "strip_size_kb": 0, 00:35:05.445 "state": "online", 00:35:05.445 "raid_level": "raid1", 00:35:05.445 "superblock": true, 00:35:05.445 "num_base_bdevs": 2, 00:35:05.445 "num_base_bdevs_discovered": 1, 00:35:05.445 "num_base_bdevs_operational": 1, 00:35:05.445 "base_bdevs_list": [ 00:35:05.445 { 00:35:05.445 "name": null, 00:35:05.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:05.445 "is_configured": false, 00:35:05.445 "data_offset": 0, 00:35:05.445 "data_size": 7936 00:35:05.445 }, 00:35:05.445 { 00:35:05.445 "name": "BaseBdev2", 00:35:05.445 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:35:05.445 "is_configured": true, 00:35:05.445 "data_offset": 256, 00:35:05.445 "data_size": 7936 00:35:05.445 } 00:35:05.445 ] 00:35:05.445 }' 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:05.445 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:06.014 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:06.014 17:31:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:06.014 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:06.014 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:06.014 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:06.014 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:06.014 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.014 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:06.014 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:06.014 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.014 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:06.014 "name": "raid_bdev1", 00:35:06.014 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:35:06.014 "strip_size_kb": 0, 00:35:06.014 "state": "online", 00:35:06.014 "raid_level": "raid1", 00:35:06.014 "superblock": true, 00:35:06.014 "num_base_bdevs": 2, 00:35:06.014 "num_base_bdevs_discovered": 1, 00:35:06.014 "num_base_bdevs_operational": 1, 00:35:06.014 "base_bdevs_list": [ 00:35:06.014 { 00:35:06.014 "name": null, 00:35:06.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:06.014 "is_configured": false, 00:35:06.014 "data_offset": 0, 00:35:06.014 "data_size": 7936 00:35:06.014 }, 00:35:06.014 { 00:35:06.014 "name": "BaseBdev2", 00:35:06.014 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:35:06.014 "is_configured": true, 00:35:06.014 "data_offset": 256, 
00:35:06.014 "data_size": 7936 00:35:06.014 } 00:35:06.014 ] 00:35:06.014 }' 00:35:06.014 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:06.014 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:06.014 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:06.014 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:06.014 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:35:06.015 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.015 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:06.015 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.015 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:06.015 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.015 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:06.015 [2024-11-26 17:31:35.977670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:06.015 [2024-11-26 17:31:35.977801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:06.015 [2024-11-26 17:31:35.977833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:35:06.015 [2024-11-26 17:31:35.977847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:06.015 [2024-11-26 17:31:35.978079] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:06.015 [2024-11-26 17:31:35.978098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:06.015 [2024-11-26 17:31:35.978169] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:35:06.015 [2024-11-26 17:31:35.978187] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:35:06.015 [2024-11-26 17:31:35.978203] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:06.015 [2024-11-26 17:31:35.978219] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:35:06.015 BaseBdev1 00:35:06.015 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.015 17:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:35:06.955 17:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:06.955 17:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:06.955 17:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:06.955 17:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:06.955 17:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:06.955 17:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:06.955 17:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:06.955 17:31:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:06.955 17:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:06.955 17:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:06.955 17:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:06.955 17:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:06.955 17:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.955 17:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:06.955 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.955 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:06.955 "name": "raid_bdev1", 00:35:06.955 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:35:06.955 "strip_size_kb": 0, 00:35:06.955 "state": "online", 00:35:06.955 "raid_level": "raid1", 00:35:06.955 "superblock": true, 00:35:06.955 "num_base_bdevs": 2, 00:35:06.955 "num_base_bdevs_discovered": 1, 00:35:06.955 "num_base_bdevs_operational": 1, 00:35:06.955 "base_bdevs_list": [ 00:35:06.955 { 00:35:06.955 "name": null, 00:35:06.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:06.955 "is_configured": false, 00:35:06.955 "data_offset": 0, 00:35:06.955 "data_size": 7936 00:35:06.955 }, 00:35:06.955 { 00:35:06.955 "name": "BaseBdev2", 00:35:06.955 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:35:06.955 "is_configured": true, 00:35:06.955 "data_offset": 256, 00:35:06.955 "data_size": 7936 00:35:06.955 } 00:35:06.955 ] 00:35:06.955 }' 00:35:06.955 17:31:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:06.955 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:07.523 "name": "raid_bdev1", 00:35:07.523 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:35:07.523 "strip_size_kb": 0, 00:35:07.523 "state": "online", 00:35:07.523 "raid_level": "raid1", 00:35:07.523 "superblock": true, 00:35:07.523 "num_base_bdevs": 2, 00:35:07.523 "num_base_bdevs_discovered": 1, 00:35:07.523 "num_base_bdevs_operational": 1, 00:35:07.523 "base_bdevs_list": [ 00:35:07.523 { 00:35:07.523 "name": 
null, 00:35:07.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.523 "is_configured": false, 00:35:07.523 "data_offset": 0, 00:35:07.523 "data_size": 7936 00:35:07.523 }, 00:35:07.523 { 00:35:07.523 "name": "BaseBdev2", 00:35:07.523 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:35:07.523 "is_configured": true, 00:35:07.523 "data_offset": 256, 00:35:07.523 "data_size": 7936 00:35:07.523 } 00:35:07.523 ] 00:35:07.523 }' 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:07.523 [2024-11-26 17:31:37.555801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:07.523 [2024-11-26 17:31:37.556262] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:35:07.523 [2024-11-26 17:31:37.556301] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:07.523 request: 00:35:07.523 { 00:35:07.523 "base_bdev": "BaseBdev1", 00:35:07.523 "raid_bdev": "raid_bdev1", 00:35:07.523 "method": "bdev_raid_add_base_bdev", 00:35:07.523 "req_id": 1 00:35:07.523 } 00:35:07.523 Got JSON-RPC error response 00:35:07.523 response: 00:35:07.523 { 00:35:07.523 "code": -22, 00:35:07.523 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:35:07.523 } 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:07.523 17:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:08.901 "name": "raid_bdev1", 00:35:08.901 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:35:08.901 "strip_size_kb": 0, 
00:35:08.901 "state": "online", 00:35:08.901 "raid_level": "raid1", 00:35:08.901 "superblock": true, 00:35:08.901 "num_base_bdevs": 2, 00:35:08.901 "num_base_bdevs_discovered": 1, 00:35:08.901 "num_base_bdevs_operational": 1, 00:35:08.901 "base_bdevs_list": [ 00:35:08.901 { 00:35:08.901 "name": null, 00:35:08.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:08.901 "is_configured": false, 00:35:08.901 "data_offset": 0, 00:35:08.901 "data_size": 7936 00:35:08.901 }, 00:35:08.901 { 00:35:08.901 "name": "BaseBdev2", 00:35:08.901 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:35:08.901 "is_configured": true, 00:35:08.901 "data_offset": 256, 00:35:08.901 "data_size": 7936 00:35:08.901 } 00:35:08.901 ] 00:35:08.901 }' 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:08.901 17:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.168 
17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:09.168 "name": "raid_bdev1", 00:35:09.168 "uuid": "b9b00cc9-0717-4512-9113-51a2c8f87591", 00:35:09.168 "strip_size_kb": 0, 00:35:09.168 "state": "online", 00:35:09.168 "raid_level": "raid1", 00:35:09.168 "superblock": true, 00:35:09.168 "num_base_bdevs": 2, 00:35:09.168 "num_base_bdevs_discovered": 1, 00:35:09.168 "num_base_bdevs_operational": 1, 00:35:09.168 "base_bdevs_list": [ 00:35:09.168 { 00:35:09.168 "name": null, 00:35:09.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:09.168 "is_configured": false, 00:35:09.168 "data_offset": 0, 00:35:09.168 "data_size": 7936 00:35:09.168 }, 00:35:09.168 { 00:35:09.168 "name": "BaseBdev2", 00:35:09.168 "uuid": "ad0fcbc6-2dd2-5f5d-aa3f-430a4495c167", 00:35:09.168 "is_configured": true, 00:35:09.168 "data_offset": 256, 00:35:09.168 "data_size": 7936 00:35:09.168 } 00:35:09.168 ] 00:35:09.168 }' 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89209 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89209 ']' 00:35:09.168 17:31:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89209 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89209 00:35:09.168 killing process with pid 89209 00:35:09.168 Received shutdown signal, test time was about 60.000000 seconds 00:35:09.168 00:35:09.168 Latency(us) 00:35:09.168 [2024-11-26T17:31:39.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:09.168 [2024-11-26T17:31:39.282Z] =================================================================================================================== 00:35:09.168 [2024-11-26T17:31:39.282Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89209' 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89209 00:35:09.168 17:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89209 00:35:09.168 [2024-11-26 17:31:39.256881] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:09.168 [2024-11-26 17:31:39.257045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:09.168 [2024-11-26 17:31:39.257104] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:35:09.168 [2024-11-26 17:31:39.257132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:35:09.735 [2024-11-26 17:31:39.607809] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:11.115 17:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:35:11.115 00:35:11.115 real 0m17.776s 00:35:11.115 user 0m23.066s 00:35:11.115 sys 0m1.936s 00:35:11.115 17:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:11.115 ************************************ 00:35:11.115 END TEST raid_rebuild_test_sb_md_interleaved 00:35:11.115 ************************************ 00:35:11.115 17:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:11.115 17:31:40 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:35:11.115 17:31:40 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:35:11.115 17:31:40 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89209 ']' 00:35:11.115 17:31:40 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89209 00:35:11.115 17:31:40 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:35:11.115 00:35:11.115 real 12m16.176s 00:35:11.115 user 16m18.855s 00:35:11.115 sys 2m15.810s 00:35:11.115 17:31:40 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:11.115 ************************************ 00:35:11.115 END TEST bdev_raid 00:35:11.115 ************************************ 00:35:11.115 17:31:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:11.115 17:31:41 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:35:11.115 17:31:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:11.115 17:31:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:11.115 17:31:41 -- common/autotest_common.sh@10 -- # set +x 00:35:11.115 
************************************ 00:35:11.115 START TEST spdkcli_raid 00:35:11.115 ************************************ 00:35:11.115 17:31:41 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:35:11.115 * Looking for test storage... 00:35:11.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:35:11.115 17:31:41 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:11.115 17:31:41 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:35:11.115 17:31:41 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:11.375 17:31:41 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:11.375 17:31:41 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:35:11.376 17:31:41 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:35:11.376 17:31:41 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:11.376 17:31:41 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:35:11.376 17:31:41 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:35:11.376 17:31:41 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:35:11.376 17:31:41 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:35:11.376 17:31:41 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:11.376 17:31:41 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:35:11.376 17:31:41 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:35:11.376 17:31:41 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:11.376 17:31:41 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:11.376 17:31:41 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:35:11.376 17:31:41 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:11.376 17:31:41 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:11.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.376 --rc genhtml_branch_coverage=1 00:35:11.376 --rc genhtml_function_coverage=1 00:35:11.376 --rc genhtml_legend=1 00:35:11.376 --rc geninfo_all_blocks=1 00:35:11.376 --rc geninfo_unexecuted_blocks=1 00:35:11.376 00:35:11.376 ' 00:35:11.376 17:31:41 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:11.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.376 --rc genhtml_branch_coverage=1 00:35:11.376 --rc genhtml_function_coverage=1 00:35:11.376 --rc genhtml_legend=1 00:35:11.376 --rc geninfo_all_blocks=1 00:35:11.376 --rc geninfo_unexecuted_blocks=1 00:35:11.376 00:35:11.376 ' 00:35:11.376 
17:31:41 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:11.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.376 --rc genhtml_branch_coverage=1 00:35:11.376 --rc genhtml_function_coverage=1 00:35:11.376 --rc genhtml_legend=1 00:35:11.376 --rc geninfo_all_blocks=1 00:35:11.376 --rc geninfo_unexecuted_blocks=1 00:35:11.376 00:35:11.376 ' 00:35:11.376 17:31:41 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:11.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.376 --rc genhtml_branch_coverage=1 00:35:11.376 --rc genhtml_function_coverage=1 00:35:11.376 --rc genhtml_legend=1 00:35:11.376 --rc geninfo_all_blocks=1 00:35:11.376 --rc geninfo_unexecuted_blocks=1 00:35:11.376 00:35:11.376 ' 00:35:11.376 17:31:41 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:35:11.376 17:31:41 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:35:11.376 17:31:41 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:35:11.376 17:31:41 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:35:11.376 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:35:11.376 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:35:11.376 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:35:11.376 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:35:11.376 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:35:11.376 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:35:11.376 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:35:11.376 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:35:11.376 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:35:11.376 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:35:11.376 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:35:11.376 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:35:11.377 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:35:11.377 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:35:11.377 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:35:11.377 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:35:11.377 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:35:11.377 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:35:11.377 17:31:41 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:35:11.377 17:31:41 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:35:11.377 17:31:41 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:35:11.377 17:31:41 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:35:11.377 17:31:41 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:35:11.377 17:31:41 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:35:11.377 17:31:41 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:35:11.377 17:31:41 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:35:11.377 17:31:41 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:35:11.377 17:31:41 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:35:11.377 17:31:41 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:35:11.377 17:31:41 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:11.377 17:31:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:11.377 17:31:41 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:35:11.377 17:31:41 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89880 00:35:11.377 17:31:41 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:35:11.377 17:31:41 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89880 00:35:11.377 17:31:41 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89880 ']' 00:35:11.377 17:31:41 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:11.377 17:31:41 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:11.377 17:31:41 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:11.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:11.377 17:31:41 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:11.377 17:31:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:11.636 [2024-11-26 17:31:41.490378] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:35:11.636 [2024-11-26 17:31:41.490792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89880 ] 00:35:11.636 [2024-11-26 17:31:41.687458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:11.894 [2024-11-26 17:31:41.810544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.894 [2024-11-26 17:31:41.810597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.849 17:31:42 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:12.849 17:31:42 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:35:12.849 17:31:42 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:35:12.849 17:31:42 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:12.849 17:31:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:12.849 17:31:42 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:35:12.849 17:31:42 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:12.849 17:31:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:12.849 17:31:42 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:12.849 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:12.849 ' 00:35:14.757 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:35:14.757 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:35:14.757 17:31:44 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:35:14.757 17:31:44 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:14.757 17:31:44 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:35:14.757 17:31:44 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:35:14.757 17:31:44 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:14.757 17:31:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:14.757 17:31:44 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:35:14.757 ' 00:35:15.695 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:35:15.695 17:31:45 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:35:15.695 17:31:45 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:15.695 17:31:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:15.695 17:31:45 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:35:15.695 17:31:45 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:15.695 17:31:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:15.695 17:31:45 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:35:15.695 17:31:45 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:35:16.263 17:31:46 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:35:16.522 17:31:46 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:35:16.522 17:31:46 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:35:16.522 17:31:46 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:16.522 17:31:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:16.522 17:31:46 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:35:16.522 17:31:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:16.522 17:31:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:16.522 17:31:46 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:35:16.522 ' 00:35:17.458 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:35:17.458 17:31:47 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:35:17.458 17:31:47 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:17.458 17:31:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:17.717 17:31:47 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:35:17.717 17:31:47 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:17.717 17:31:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:17.717 17:31:47 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:35:17.717 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:35:17.717 ' 00:35:19.096 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:35:19.096 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:35:19.096 17:31:49 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:35:19.096 17:31:49 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:19.096 17:31:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:19.096 17:31:49 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89880 00:35:19.096 17:31:49 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89880 ']' 00:35:19.354 17:31:49 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89880 00:35:19.354 17:31:49 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:35:19.354 17:31:49 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.354 17:31:49 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89880 00:35:19.354 17:31:49 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:19.354 killing process with pid 89880 00:35:19.354 17:31:49 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:19.354 17:31:49 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89880' 00:35:19.354 17:31:49 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89880 00:35:19.354 17:31:49 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89880 00:35:21.886 17:31:51 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:35:21.886 17:31:51 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89880 ']' 00:35:21.886 17:31:51 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89880 00:35:21.886 17:31:51 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89880 ']' 00:35:21.886 17:31:51 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89880 00:35:21.886 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89880) - No such process 00:35:21.886 Process with pid 89880 is not found 00:35:21.886 17:31:51 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89880 is not found' 00:35:21.886 17:31:51 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:35:21.886 17:31:51 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:21.886 17:31:51 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:21.886 17:31:51 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:21.886 00:35:21.886 real 0m10.757s 00:35:21.886 user 0m22.045s 00:35:21.886 sys 
0m1.355s 00:35:21.886 17:31:51 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:21.886 17:31:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:21.886 ************************************ 00:35:21.886 END TEST spdkcli_raid 00:35:21.886 ************************************ 00:35:21.886 17:31:51 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:35:21.886 17:31:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:21.886 17:31:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:21.886 17:31:51 -- common/autotest_common.sh@10 -- # set +x 00:35:21.886 ************************************ 00:35:21.886 START TEST blockdev_raid5f 00:35:21.886 ************************************ 00:35:21.886 17:31:51 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:35:22.145 * Looking for test storage... 00:35:22.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:35:22.145 17:31:52 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:22.145 17:31:52 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:35:22.145 17:31:52 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:22.145 17:31:52 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:22.145 17:31:52 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:35:22.145 17:31:52 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:22.145 17:31:52 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:22.145 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.145 --rc genhtml_branch_coverage=1 00:35:22.145 --rc genhtml_function_coverage=1 00:35:22.145 --rc genhtml_legend=1 00:35:22.145 --rc geninfo_all_blocks=1 00:35:22.145 --rc geninfo_unexecuted_blocks=1 00:35:22.145 00:35:22.145 ' 00:35:22.145 17:31:52 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:22.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.145 --rc genhtml_branch_coverage=1 00:35:22.145 --rc genhtml_function_coverage=1 00:35:22.145 --rc genhtml_legend=1 00:35:22.145 --rc geninfo_all_blocks=1 00:35:22.146 --rc geninfo_unexecuted_blocks=1 00:35:22.146 00:35:22.146 ' 00:35:22.146 17:31:52 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:22.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.146 --rc genhtml_branch_coverage=1 00:35:22.146 --rc genhtml_function_coverage=1 00:35:22.146 --rc genhtml_legend=1 00:35:22.146 --rc geninfo_all_blocks=1 00:35:22.146 --rc geninfo_unexecuted_blocks=1 00:35:22.146 00:35:22.146 ' 00:35:22.146 17:31:52 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:22.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.146 --rc genhtml_branch_coverage=1 00:35:22.146 --rc genhtml_function_coverage=1 00:35:22.146 --rc genhtml_legend=1 00:35:22.146 --rc geninfo_all_blocks=1 00:35:22.146 --rc geninfo_unexecuted_blocks=1 00:35:22.146 00:35:22.146 ' 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:35:22.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90167 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' 
SIGINT SIGTERM EXIT 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90167 00:35:22.146 17:31:52 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90167 ']' 00:35:22.146 17:31:52 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:22.146 17:31:52 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:22.146 17:31:52 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:22.146 17:31:52 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:22.146 17:31:52 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:35:22.146 17:31:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:22.405 [2024-11-26 17:31:52.259185] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:35:22.405 [2024-11-26 17:31:52.260250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90167 ] 00:35:22.405 [2024-11-26 17:31:52.446938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.663 [2024-11-26 17:31:52.565570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:23.598 17:31:53 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:23.598 17:31:53 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:35:23.598 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:35:23.598 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:35:23.598 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:35:23.598 17:31:53 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.598 17:31:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:23.598 Malloc0 00:35:23.598 Malloc1 00:35:23.598 Malloc2 00:35:23.598 17:31:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.598 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:35:23.598 17:31:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.598 17:31:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:23.598 17:31:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.598 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:35:23.858 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.858 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.858 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.858 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:35:23.858 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.858 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.858 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:35:23.858 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:35:23.858 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "171438a4-4030-4f91-9ca3-95f8c052d308"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "171438a4-4030-4f91-9ca3-95f8c052d308",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "171438a4-4030-4f91-9ca3-95f8c052d308",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "cfa3f4ad-9a09-4747-b02e-d7b0a519abc6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"bd2f05e5-992f-429e-9ceb-a454ab579d4f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0765ca34-f807-45e5-921f-c20941b83c2b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:35:23.858 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:35:23.858 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:35:23.858 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:35:23.858 17:31:53 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90167 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90167 ']' 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90167 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90167 00:35:23.858 killing process with pid 90167 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90167' 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90167 00:35:23.858 17:31:53 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90167 00:35:27.177 17:31:56 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:27.177 17:31:56 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:35:27.177 17:31:56 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:35:27.177 17:31:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:27.177 17:31:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:27.177 ************************************ 00:35:27.177 START TEST bdev_hello_world 00:35:27.177 ************************************ 00:35:27.177 17:31:56 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:35:27.177 [2024-11-26 17:31:56.936440] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:35:27.177 [2024-11-26 17:31:56.936590] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90234 ] 00:35:27.177 [2024-11-26 17:31:57.126717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.177 [2024-11-26 17:31:57.249550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.747 [2024-11-26 17:31:57.834614] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:35:27.747 [2024-11-26 17:31:57.834929] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:35:27.747 [2024-11-26 17:31:57.834961] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:35:27.747 [2024-11-26 17:31:57.835485] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:35:27.747 [2024-11-26 17:31:57.835648] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:35:27.747 [2024-11-26 17:31:57.835668] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:35:27.747 [2024-11-26 17:31:57.835725] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:35:27.747 00:35:27.747 [2024-11-26 17:31:57.835746] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:35:29.654 00:35:29.654 real 0m2.517s 00:35:29.654 user 0m2.041s 00:35:29.654 sys 0m0.352s 00:35:29.654 17:31:59 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:29.654 17:31:59 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:35:29.654 ************************************ 00:35:29.654 END TEST bdev_hello_world 00:35:29.654 ************************************ 00:35:29.654 17:31:59 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:35:29.654 17:31:59 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:29.654 17:31:59 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:29.654 17:31:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:29.654 ************************************ 00:35:29.654 START TEST bdev_bounds 00:35:29.654 ************************************ 00:35:29.654 17:31:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:35:29.654 17:31:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90286 00:35:29.654 Process bdevio pid: 90286 00:35:29.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:29.654 17:31:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:35:29.654 17:31:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:35:29.654 17:31:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90286' 00:35:29.654 17:31:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90286 00:35:29.654 17:31:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90286 ']' 00:35:29.654 17:31:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:29.654 17:31:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:29.654 17:31:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:29.654 17:31:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:29.654 17:31:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:35:29.654 [2024-11-26 17:31:59.528176] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:35:29.654 [2024-11-26 17:31:59.528327] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90286 ] 00:35:29.654 [2024-11-26 17:31:59.712620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:29.914 [2024-11-26 17:31:59.837606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.914 [2024-11-26 17:31:59.837734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.914 [2024-11-26 17:31:59.837794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:30.532 17:32:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:30.532 17:32:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:35:30.532 17:32:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:35:30.532 I/O targets: 00:35:30.532 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:35:30.532 00:35:30.532 00:35:30.532 CUnit - A unit testing framework for C - Version 2.1-3 00:35:30.532 http://cunit.sourceforge.net/ 00:35:30.532 00:35:30.532 00:35:30.532 Suite: bdevio tests on: raid5f 00:35:30.532 Test: blockdev write read block ...passed 00:35:30.532 Test: blockdev write zeroes read block ...passed 00:35:30.532 Test: blockdev write zeroes read no split ...passed 00:35:30.791 Test: blockdev write zeroes read split ...passed 00:35:30.791 Test: blockdev write zeroes read split partial ...passed 00:35:30.791 Test: blockdev reset ...passed 00:35:30.791 Test: blockdev write read 8 blocks ...passed 00:35:30.791 Test: blockdev write read size > 128k ...passed 00:35:30.791 Test: blockdev write read invalid size ...passed 00:35:30.791 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:35:30.791 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:30.791 Test: blockdev write read max offset ...passed 00:35:30.791 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:30.791 Test: blockdev writev readv 8 blocks ...passed 00:35:30.791 Test: blockdev writev readv 30 x 1block ...passed 00:35:30.791 Test: blockdev writev readv block ...passed 00:35:30.791 Test: blockdev writev readv size > 128k ...passed 00:35:30.791 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:30.791 Test: blockdev comparev and writev ...passed 00:35:30.791 Test: blockdev nvme passthru rw ...passed 00:35:30.791 Test: blockdev nvme passthru vendor specific ...passed 00:35:30.791 Test: blockdev nvme admin passthru ...passed 00:35:30.791 Test: blockdev copy ...passed 00:35:30.791 00:35:30.791 Run Summary: Type Total Ran Passed Failed Inactive 00:35:30.791 suites 1 1 n/a 0 0 00:35:30.791 tests 23 23 23 0 0 00:35:30.791 asserts 130 130 130 0 n/a 00:35:30.791 00:35:30.791 Elapsed time = 0.645 seconds 00:35:30.791 0 00:35:30.791 17:32:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90286 00:35:30.791 17:32:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90286 ']' 00:35:30.791 17:32:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90286 00:35:30.791 17:32:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:35:30.791 17:32:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.791 17:32:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90286 00:35:31.049 17:32:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:31.049 17:32:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:31.049 17:32:00 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 90286' 00:35:31.049 killing process with pid 90286 00:35:31.049 17:32:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90286 00:35:31.049 17:32:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90286 00:35:32.423 17:32:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:35:32.423 ************************************ 00:35:32.423 END TEST bdev_bounds 00:35:32.423 ************************************ 00:35:32.423 00:35:32.423 real 0m3.024s 00:35:32.423 user 0m7.489s 00:35:32.423 sys 0m0.492s 00:35:32.423 17:32:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.424 17:32:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:35:32.424 17:32:02 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:35:32.424 17:32:02 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:32.424 17:32:02 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:32.424 17:32:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:32.424 ************************************ 00:35:32.424 START TEST bdev_nbd 00:35:32.424 ************************************ 00:35:32.424 17:32:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:35:32.424 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:35:32.424 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:35:32.424 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:32.424 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local 
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:32.424 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:35:32.424 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:35:32.424 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:35:32.424 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:35:32.424 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:35:32.424 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:35:32.424 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:35:32.424 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:35:32.682 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:35:32.682 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:35:32.682 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:35:32.682 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90347 00:35:32.682 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:35:32.682 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:35:32.682 17:32:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90347 /var/tmp/spdk-nbd.sock 00:35:32.682 17:32:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90347 ']' 00:35:32.682 17:32:02 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:35:32.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:35:32.682 17:32:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.682 17:32:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:35:32.682 17:32:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.682 17:32:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:35:32.682 [2024-11-26 17:32:02.634891] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:35:32.682 [2024-11-26 17:32:02.635037] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.939 [2024-11-26 17:32:02.819319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.939 [2024-11-26 17:32:02.940677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.506 17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:33.506 17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:35:33.506 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:35:33.506 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:33.506 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:35:33.506 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:35:33.506 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # 
nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:35:33.506 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:33.506 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:35:33.506 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:35:33.506 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:35:33.506 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:35:33.506 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:35:33.506 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:35:33.506 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:33.766 
17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:33.766 1+0 records in 00:35:33.766 1+0 records out 00:35:33.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274155 s, 14.9 MB/s 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:35:33.766 17:32:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:34.025 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:35:34.025 { 00:35:34.025 "nbd_device": "/dev/nbd0", 00:35:34.025 "bdev_name": "raid5f" 00:35:34.025 } 00:35:34.025 ]' 00:35:34.025 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:35:34.025 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:35:34.025 { 00:35:34.025 "nbd_device": "/dev/nbd0", 00:35:34.025 "bdev_name": "raid5f" 00:35:34.025 } 00:35:34.025 ]' 00:35:34.025 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:35:34.025 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks 
/var/tmp/spdk-nbd.sock /dev/nbd0 00:35:34.025 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:34.025 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:34.025 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:34.025 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:34.025 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:34.026 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:34.285 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:34.285 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:34.285 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:34.286 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:34.286 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:34.286 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:34.286 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:34.286 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:34.286 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:34.286 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:34.286 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:34.544 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:35:34.802 /dev/nbd0 00:35:34.802 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:34.802 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:34.802 17:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:35:34.802 17:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:34.802 17:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:34.802 17:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:34.802 17:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:35:34.802 17:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:34.802 17:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:34.802 17:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:34.802 17:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:34.802 1+0 records in 00:35:34.802 1+0 records out 00:35:34.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321418 s, 12.7 MB/s 00:35:34.802 17:32:04 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:34.802 17:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:34.802 17:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:35.061 17:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:35.061 17:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:35.061 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:35.061 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:35.061 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:35.061 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:35.061 17:32:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:35.061 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:35:35.061 { 00:35:35.061 "nbd_device": "/dev/nbd0", 00:35:35.061 "bdev_name": "raid5f" 00:35:35.061 } 00:35:35.061 ]' 00:35:35.061 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:35.061 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:35:35.061 { 00:35:35.061 "nbd_device": "/dev/nbd0", 00:35:35.061 "bdev_name": "raid5f" 00:35:35.061 } 00:35:35.061 ]' 00:35:35.320 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:35:35.320 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:35:35.320 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:35.320 17:32:05 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # count=1 00:35:35.320 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:35:35.320 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:35:35.320 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:35:35.320 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:35:35.320 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:35:35.320 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:35:35.321 256+0 records in 00:35:35.321 256+0 records out 00:35:35.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110627 s, 94.8 MB/s 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:35:35.321 256+0 records in 00:35:35.321 256+0 records out 00:35:35.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0374721 s, 28.0 MB/s 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:35.321 17:32:05 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:35.321 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:35.580 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:35.580 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:35.580 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:35.580 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:35.580 
17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:35.580 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:35.580 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:35.580 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:35.580 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:35.580 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:35.580 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:35.839 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:35.839 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:35.839 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:35.839 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:35.839 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:35:35.839 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:35.839 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:35:35.839 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:35:35.839 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:35:35.839 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:35:35.839 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:35:35.839 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:35:35.839 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:35.839 17:32:05 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:35.839 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:35:35.839 17:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:35:36.098 malloc_lvol_verify 00:35:36.098 17:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:35:36.356 03562b22-7391-4e53-9366-f1a4973fe45c 00:35:36.356 17:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:35:36.615 ee2dfbcb-50b2-4860-a226-c982bc6becea 00:35:36.616 17:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:35:36.874 /dev/nbd0 00:35:36.874 17:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:35:36.874 17:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:35:36.874 17:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:35:36.874 17:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:35:36.874 17:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:35:36.874 mke2fs 1.47.0 (5-Feb-2023) 00:35:36.874 Discarding device blocks: 0/4096 done 00:35:36.874 Creating filesystem with 4096 1k blocks and 1024 inodes 00:35:36.874 00:35:36.874 Allocating group tables: 0/1 done 00:35:36.874 Writing inode tables: 0/1 done 00:35:36.874 Creating journal (1024 blocks): done 00:35:36.874 Writing superblocks and filesystem accounting information: 0/1 
done 00:35:36.874 00:35:36.874 17:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:36.874 17:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:36.874 17:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:36.874 17:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:36.874 17:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:36.874 17:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:36.874 17:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90347 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90347 ']' 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90347 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90347 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90347' 00:35:37.133 killing process with pid 90347 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90347 00:35:37.133 17:32:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@978 -- # wait 90347 00:35:39.072 17:32:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:35:39.072 00:35:39.072 real 0m6.187s 00:35:39.072 user 0m8.262s 00:35:39.072 sys 0m1.602s 00:35:39.072 17:32:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:39.072 17:32:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:35:39.072 ************************************ 00:35:39.072 END TEST bdev_nbd 00:35:39.072 ************************************ 00:35:39.072 17:32:08 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:35:39.072 17:32:08 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:35:39.072 17:32:08 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:35:39.072 17:32:08 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:35:39.072 17:32:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:39.072 17:32:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:39.072 17:32:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:39.072 ************************************ 00:35:39.072 START TEST bdev_fio 00:35:39.072 
************************************ 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:35:39.072 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:39.072 
17:32:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:35:39.072 ************************************ 00:35:39.072 START TEST bdev_fio_rw_verify 00:35:39.072 ************************************ 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:39.072 17:32:08 
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:39.072 17:32:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:39.072 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:39.072 fio-3.35 00:35:39.072 Starting 1 thread 00:35:51.278 00:35:51.278 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90556: Tue Nov 26 17:32:20 2024 00:35:51.278 read: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(391MiB/10001msec) 00:35:51.278 slat (usec): min=19, max=233, avg=23.80, stdev= 4.13 00:35:51.278 clat (usec): min=12, max=845, avg=160.28, stdev=58.95 00:35:51.278 lat (usec): min=33, max=888, avg=184.09, stdev=59.87 00:35:51.278 clat percentiles (usec): 00:35:51.278 | 50.000th=[ 157], 99.000th=[ 285], 99.900th=[ 355], 
99.990th=[ 457], 00:35:51.278 | 99.999th=[ 791] 00:35:51.278 write: IOPS=10.5k, BW=41.0MiB/s (43.0MB/s)(404MiB/9868msec); 0 zone resets 00:35:51.278 slat (usec): min=8, max=246, avg=20.07, stdev= 4.88 00:35:51.278 clat (usec): min=68, max=704, avg=367.34, stdev=55.37 00:35:51.278 lat (usec): min=85, max=757, avg=387.41, stdev=56.80 00:35:51.278 clat percentiles (usec): 00:35:51.278 | 50.000th=[ 367], 99.000th=[ 506], 99.900th=[ 594], 99.990th=[ 660], 00:35:51.278 | 99.999th=[ 685] 00:35:51.278 bw ( KiB/s): min=35808, max=47552, per=99.03%, avg=41545.26, stdev=3647.59, samples=19 00:35:51.278 iops : min= 8952, max=11888, avg=10386.32, stdev=911.90, samples=19 00:35:51.278 lat (usec) : 20=0.01%, 50=0.01%, 100=9.31%, 250=36.94%, 500=53.14% 00:35:51.278 lat (usec) : 750=0.60%, 1000=0.01% 00:35:51.278 cpu : usr=98.82%, sys=0.38%, ctx=22, majf=0, minf=8442 00:35:51.278 IO depths : 1=7.7%, 2=20.1%, 4=54.9%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.278 complete : 0=0.0%, 4=89.9%, 8=10.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.278 issued rwts: total=100041,103498,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.278 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:51.278 00:35:51.278 Run status group 0 (all jobs): 00:35:51.278 READ: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=391MiB (410MB), run=10001-10001msec 00:35:51.278 WRITE: bw=41.0MiB/s (43.0MB/s), 41.0MiB/s-41.0MiB/s (43.0MB/s-43.0MB/s), io=404MiB (424MB), run=9868-9868msec 00:35:51.845 ----------------------------------------------------- 00:35:51.845 Suppressions used: 00:35:51.845 count bytes template 00:35:51.845 1 7 /usr/src/fio/parse.c 00:35:51.845 338 32448 /usr/src/fio/iolog.c 00:35:51.845 1 8 libtcmalloc_minimal.so 00:35:51.845 1 904 libcrypto.so 00:35:51.845 ----------------------------------------------------- 00:35:51.845 00:35:51.845 00:35:51.845 real 0m13.062s 00:35:51.845 
user 0m13.556s 00:35:51.845 sys 0m0.888s 00:35:51.845 17:32:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:51.845 ************************************ 00:35:51.845 END TEST bdev_fio_rw_verify 00:35:51.845 ************************************ 00:35:51.845 17:32:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:35:52.104 17:32:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:35:52.104 17:32:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:52.104 17:32:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:35:52.104 17:32:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:52.104 17:32:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:35:52.104 17:32:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:35:52.104 17:32:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "171438a4-4030-4f91-9ca3-95f8c052d308"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "171438a4-4030-4f91-9ca3-95f8c052d308",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "171438a4-4030-4f91-9ca3-95f8c052d308",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "cfa3f4ad-9a09-4747-b02e-d7b0a519abc6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "bd2f05e5-992f-429e-9ceb-a454ab579d4f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": 
"Malloc2",' ' "uuid": "0765ca34-f807-45e5-921f-c20941b83c2b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:52.105 /home/vagrant/spdk_repo/spdk 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:35:52.105 00:35:52.105 real 0m13.301s 00:35:52.105 user 0m13.658s 00:35:52.105 sys 0m0.993s 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:52.105 ************************************ 00:35:52.105 END TEST bdev_fio 00:35:52.105 ************************************ 00:35:52.105 17:32:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:35:52.105 17:32:22 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:52.105 17:32:22 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:35:52.105 17:32:22 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:35:52.105 17:32:22 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.105 17:32:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:52.105 ************************************ 00:35:52.105 START TEST bdev_verify 00:35:52.105 ************************************ 00:35:52.105 17:32:22 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:35:52.364 [2024-11-26 17:32:22.233389] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:35:52.364 [2024-11-26 17:32:22.233550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90721 ] 00:35:52.364 [2024-11-26 17:32:22.420881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:52.623 [2024-11-26 17:32:22.543763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:52.623 [2024-11-26 17:32:22.543804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.192 Running I/O for 5 seconds... 00:35:55.508 9254.00 IOPS, 36.15 MiB/s [2024-11-26T17:32:26.557Z] 9090.50 IOPS, 35.51 MiB/s [2024-11-26T17:32:27.492Z] 8812.67 IOPS, 34.42 MiB/s [2024-11-26T17:32:28.425Z] 8641.50 IOPS, 33.76 MiB/s [2024-11-26T17:32:28.425Z] 8474.20 IOPS, 33.10 MiB/s 00:35:58.311 Latency(us) 00:35:58.311 [2024-11-26T17:32:28.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:58.311 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:58.311 Verification LBA range: start 0x0 length 0x2000 00:35:58.311 raid5f : 5.02 4231.70 16.53 0.00 0.00 45612.15 218.78 38321.45 00:35:58.311 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:58.311 Verification LBA range: start 0x2000 length 0x2000 00:35:58.311 raid5f : 5.02 4254.99 16.62 0.00 0.00 45398.37 888.29 40427.03 00:35:58.311 [2024-11-26T17:32:28.425Z] =================================================================================================================== 00:35:58.311 [2024-11-26T17:32:28.425Z] Total : 8486.69 33.15 0.00 0.00 45504.97 218.78 40427.03 00:36:00.210 00:36:00.210 
real 0m7.701s 00:36:00.210 user 0m14.106s 00:36:00.210 sys 0m0.362s 00:36:00.210 17:32:29 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.210 17:32:29 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:36:00.210 ************************************ 00:36:00.210 END TEST bdev_verify 00:36:00.210 ************************************ 00:36:00.210 17:32:29 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:36:00.210 17:32:29 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:36:00.210 17:32:29 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:00.210 17:32:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:36:00.210 ************************************ 00:36:00.210 START TEST bdev_verify_big_io 00:36:00.210 ************************************ 00:36:00.210 17:32:29 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:36:00.210 [2024-11-26 17:32:29.985658] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 
00:36:00.210 [2024-11-26 17:32:29.985817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90814 ] 00:36:00.210 [2024-11-26 17:32:30.162127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:00.210 [2024-11-26 17:32:30.292665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:00.210 [2024-11-26 17:32:30.292709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:00.777 Running I/O for 5 seconds... 00:36:03.092 506.00 IOPS, 31.62 MiB/s [2024-11-26T17:32:34.143Z] 601.50 IOPS, 37.59 MiB/s [2024-11-26T17:32:35.080Z] 675.67 IOPS, 42.23 MiB/s [2024-11-26T17:32:36.015Z] 728.75 IOPS, 45.55 MiB/s [2024-11-26T17:32:36.583Z] 685.40 IOPS, 42.84 MiB/s 00:36:06.469 Latency(us) 00:36:06.469 [2024-11-26T17:32:36.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:06.469 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:06.469 Verification LBA range: start 0x0 length 0x200 00:36:06.469 raid5f : 5.42 328.32 20.52 0.00 0.00 9780281.64 217.14 481755.40 00:36:06.469 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:06.469 Verification LBA range: start 0x200 length 0x200 00:36:06.469 raid5f : 5.39 353.21 22.08 0.00 0.00 8975478.83 355.32 444697.29 00:36:06.469 [2024-11-26T17:32:36.583Z] =================================================================================================================== 00:36:06.469 [2024-11-26T17:32:36.583Z] Total : 681.54 42.60 0.00 0.00 9364004.32 217.14 481755.40 00:36:07.849 00:36:07.849 real 0m7.952s 00:36:07.849 user 0m14.656s 00:36:07.849 sys 0m0.358s 00:36:07.849 17:32:37 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:07.849 
************************************ 00:36:07.849 END TEST bdev_verify_big_io 00:36:07.849 17:32:37 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:36:07.849 ************************************ 00:36:07.849 17:32:37 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:07.849 17:32:37 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:36:07.849 17:32:37 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:07.849 17:32:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:36:07.849 ************************************ 00:36:07.849 START TEST bdev_write_zeroes 00:36:07.849 ************************************ 00:36:07.849 17:32:37 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:08.116 [2024-11-26 17:32:38.022486] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:36:08.117 [2024-11-26 17:32:38.022655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90920 ] 00:36:08.117 [2024-11-26 17:32:38.211635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.393 [2024-11-26 17:32:38.330634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:08.990 Running I/O for 1 seconds... 
00:36:09.927 26271.00 IOPS, 102.62 MiB/s 00:36:09.927 Latency(us) 00:36:09.927 [2024-11-26T17:32:40.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:09.927 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:09.927 raid5f : 1.01 26235.58 102.48 0.00 0.00 4862.73 1394.94 6658.88 00:36:09.927 [2024-11-26T17:32:40.042Z] =================================================================================================================== 00:36:09.928 [2024-11-26T17:32:40.042Z] Total : 26235.58 102.48 0.00 0.00 4862.73 1394.94 6658.88 00:36:11.831 00:36:11.831 real 0m3.524s 00:36:11.831 user 0m3.055s 00:36:11.831 sys 0m0.337s 00:36:11.831 ************************************ 00:36:11.831 END TEST bdev_write_zeroes 00:36:11.831 ************************************ 00:36:11.831 17:32:41 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:11.831 17:32:41 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:36:11.831 17:32:41 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:11.831 17:32:41 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:36:11.831 17:32:41 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:11.831 17:32:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:36:11.831 ************************************ 00:36:11.831 START TEST bdev_json_nonenclosed 00:36:11.831 ************************************ 00:36:11.831 17:32:41 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:11.831 [2024-11-26 
17:32:41.609362] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:36:11.831 [2024-11-26 17:32:41.609493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90979 ] 00:36:11.831 [2024-11-26 17:32:41.792557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:11.831 [2024-11-26 17:32:41.902944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:11.831 [2024-11-26 17:32:41.903045] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:36:11.831 [2024-11-26 17:32:41.903078] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:36:11.831 [2024-11-26 17:32:41.903091] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:12.090 00:36:12.090 real 0m0.660s 00:36:12.090 user 0m0.405s 00:36:12.090 sys 0m0.150s 00:36:12.090 ************************************ 00:36:12.090 END TEST bdev_json_nonenclosed 00:36:12.090 ************************************ 00:36:12.090 17:32:42 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:12.090 17:32:42 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:36:12.349 17:32:42 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:12.349 17:32:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:36:12.349 17:32:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:12.349 17:32:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:36:12.349 
************************************ 00:36:12.349 START TEST bdev_json_nonarray 00:36:12.349 ************************************ 00:36:12.349 17:32:42 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:12.349 [2024-11-26 17:32:42.343877] Starting SPDK v25.01-pre git sha1 ff173863b / DPDK 24.03.0 initialization... 00:36:12.349 [2024-11-26 17:32:42.344003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91004 ] 00:36:12.608 [2024-11-26 17:32:42.528633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.608 [2024-11-26 17:32:42.647306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:12.608 [2024-11-26 17:32:42.647427] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:36:12.608 [2024-11-26 17:32:42.647453] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:36:12.608 [2024-11-26 17:32:42.647476] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:12.867 00:36:12.867 real 0m0.660s 00:36:12.867 user 0m0.412s 00:36:12.867 sys 0m0.143s 00:36:12.867 17:32:42 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:12.867 ************************************ 00:36:12.867 END TEST bdev_json_nonarray 00:36:12.867 ************************************ 00:36:12.867 17:32:42 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:36:12.867 17:32:42 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:36:12.867 17:32:42 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:36:12.867 17:32:42 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:36:12.867 17:32:42 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:36:12.867 17:32:42 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:36:12.867 17:32:42 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:36:13.126 17:32:42 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:36:13.126 17:32:42 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:36:13.126 17:32:42 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:36:13.126 17:32:42 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:36:13.126 17:32:42 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:36:13.126 00:36:13.126 real 0m51.100s 00:36:13.126 user 1m8.977s 00:36:13.126 sys 0m6.072s 00:36:13.126 17:32:42 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:13.126 ************************************ 00:36:13.126 END TEST blockdev_raid5f 00:36:13.126 
************************************ 00:36:13.126 17:32:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:36:13.126 17:32:43 -- spdk/autotest.sh@194 -- # uname -s 00:36:13.126 17:32:43 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:36:13.126 17:32:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:36:13.126 17:32:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:36:13.126 17:32:43 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@260 -- # timing_exit lib 00:36:13.126 17:32:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:13.126 17:32:43 -- common/autotest_common.sh@10 -- # set +x 00:36:13.126 17:32:43 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:13.126 17:32:43 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:13.126 17:32:43 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:13.126 17:32:43 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:13.126 17:32:43 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:13.126 17:32:43 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 
00:36:13.126 17:32:43 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:13.126 17:32:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:13.126 17:32:43 -- common/autotest_common.sh@10 -- # set +x 00:36:13.126 17:32:43 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:13.126 17:32:43 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:13.126 17:32:43 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:13.126 17:32:43 -- common/autotest_common.sh@10 -- # set +x 00:36:15.653 INFO: APP EXITING 00:36:15.653 INFO: killing all VMs 00:36:15.653 INFO: killing vhost app 00:36:15.653 INFO: EXIT DONE 00:36:15.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:15.910 Waiting for block devices as requested 00:36:16.226 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:16.226 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:17.172 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:17.172 Cleaning 00:36:17.172 Removing: /var/run/dpdk/spdk0/config 00:36:17.172 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:17.172 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:17.172 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:17.172 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:17.172 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:17.172 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:17.172 Removing: /dev/shm/spdk_tgt_trace.pid56829 00:36:17.172 Removing: /var/run/dpdk/spdk0 00:36:17.172 Removing: /var/run/dpdk/spdk_pid56577 00:36:17.172 Removing: /var/run/dpdk/spdk_pid56829 00:36:17.172 Removing: /var/run/dpdk/spdk_pid57069 00:36:17.172 Removing: /var/run/dpdk/spdk_pid57184 00:36:17.172 Removing: /var/run/dpdk/spdk_pid57240 00:36:17.172 Removing: /var/run/dpdk/spdk_pid57379 00:36:17.172 Removing: /var/run/dpdk/spdk_pid57397 00:36:17.172 
Removing: /var/run/dpdk/spdk_pid57618 00:36:17.172 Removing: /var/run/dpdk/spdk_pid57736 00:36:17.172 Removing: /var/run/dpdk/spdk_pid57854 00:36:17.172 Removing: /var/run/dpdk/spdk_pid57982 00:36:17.172 Removing: /var/run/dpdk/spdk_pid58102 00:36:17.172 Removing: /var/run/dpdk/spdk_pid58140 00:36:17.172 Removing: /var/run/dpdk/spdk_pid58178 00:36:17.172 Removing: /var/run/dpdk/spdk_pid58254 00:36:17.172 Removing: /var/run/dpdk/spdk_pid58382 00:36:17.172 Removing: /var/run/dpdk/spdk_pid58855 00:36:17.172 Removing: /var/run/dpdk/spdk_pid58938 00:36:17.172 Removing: /var/run/dpdk/spdk_pid59024 00:36:17.172 Removing: /var/run/dpdk/spdk_pid59040 00:36:17.172 Removing: /var/run/dpdk/spdk_pid59210 00:36:17.172 Removing: /var/run/dpdk/spdk_pid59226 00:36:17.430 Removing: /var/run/dpdk/spdk_pid59391 00:36:17.430 Removing: /var/run/dpdk/spdk_pid59412 00:36:17.430 Removing: /var/run/dpdk/spdk_pid59487 00:36:17.430 Removing: /var/run/dpdk/spdk_pid59511 00:36:17.430 Removing: /var/run/dpdk/spdk_pid59575 00:36:17.430 Removing: /var/run/dpdk/spdk_pid59604 00:36:17.430 Removing: /var/run/dpdk/spdk_pid59811 00:36:17.430 Removing: /var/run/dpdk/spdk_pid59851 00:36:17.430 Removing: /var/run/dpdk/spdk_pid59940 00:36:17.430 Removing: /var/run/dpdk/spdk_pid61316 00:36:17.430 Removing: /var/run/dpdk/spdk_pid61528 00:36:17.430 Removing: /var/run/dpdk/spdk_pid61669 00:36:17.430 Removing: /var/run/dpdk/spdk_pid62311 00:36:17.430 Removing: /var/run/dpdk/spdk_pid62523 00:36:17.430 Removing: /var/run/dpdk/spdk_pid62663 00:36:17.430 Removing: /var/run/dpdk/spdk_pid63312 00:36:17.430 Removing: /var/run/dpdk/spdk_pid63636 00:36:17.430 Removing: /var/run/dpdk/spdk_pid63782 00:36:17.430 Removing: /var/run/dpdk/spdk_pid65167 00:36:17.430 Removing: /var/run/dpdk/spdk_pid65420 00:36:17.430 Removing: /var/run/dpdk/spdk_pid65560 00:36:17.430 Removing: /var/run/dpdk/spdk_pid66951 00:36:17.430 Removing: /var/run/dpdk/spdk_pid67204 00:36:17.430 Removing: /var/run/dpdk/spdk_pid67344 00:36:17.430 Removing: 
/var/run/dpdk/spdk_pid68737 00:36:17.430 Removing: /var/run/dpdk/spdk_pid69190 00:36:17.430 Removing: /var/run/dpdk/spdk_pid69336 00:36:17.430 Removing: /var/run/dpdk/spdk_pid70834 00:36:17.430 Removing: /var/run/dpdk/spdk_pid71098 00:36:17.430 Removing: /var/run/dpdk/spdk_pid71245 00:36:17.430 Removing: /var/run/dpdk/spdk_pid72729 00:36:17.430 Removing: /var/run/dpdk/spdk_pid72995 00:36:17.430 Removing: /var/run/dpdk/spdk_pid73141 00:36:17.430 Removing: /var/run/dpdk/spdk_pid74627 00:36:17.430 Removing: /var/run/dpdk/spdk_pid75115 00:36:17.430 Removing: /var/run/dpdk/spdk_pid75260 00:36:17.430 Removing: /var/run/dpdk/spdk_pid75405 00:36:17.430 Removing: /var/run/dpdk/spdk_pid75840 00:36:17.430 Removing: /var/run/dpdk/spdk_pid76586 00:36:17.430 Removing: /var/run/dpdk/spdk_pid76982 00:36:17.430 Removing: /var/run/dpdk/spdk_pid77673 00:36:17.430 Removing: /var/run/dpdk/spdk_pid78136 00:36:17.430 Removing: /var/run/dpdk/spdk_pid78907 00:36:17.431 Removing: /var/run/dpdk/spdk_pid79316 00:36:17.431 Removing: /var/run/dpdk/spdk_pid81290 00:36:17.431 Removing: /var/run/dpdk/spdk_pid81734 00:36:17.431 Removing: /var/run/dpdk/spdk_pid82183 00:36:17.431 Removing: /var/run/dpdk/spdk_pid84275 00:36:17.431 Removing: /var/run/dpdk/spdk_pid84767 00:36:17.431 Removing: /var/run/dpdk/spdk_pid85289 00:36:17.431 Removing: /var/run/dpdk/spdk_pid86347 00:36:17.431 Removing: /var/run/dpdk/spdk_pid86679 00:36:17.431 Removing: /var/run/dpdk/spdk_pid87612 00:36:17.431 Removing: /var/run/dpdk/spdk_pid87939 00:36:17.431 Removing: /var/run/dpdk/spdk_pid88885 00:36:17.689 Removing: /var/run/dpdk/spdk_pid89209 00:36:17.689 Removing: /var/run/dpdk/spdk_pid89880 00:36:17.689 Removing: /var/run/dpdk/spdk_pid90167 00:36:17.689 Removing: /var/run/dpdk/spdk_pid90234 00:36:17.689 Removing: /var/run/dpdk/spdk_pid90286 00:36:17.689 Removing: /var/run/dpdk/spdk_pid90541 00:36:17.689 Removing: /var/run/dpdk/spdk_pid90721 00:36:17.689 Removing: /var/run/dpdk/spdk_pid90814 00:36:17.689 Removing: 
/var/run/dpdk/spdk_pid90920 00:36:17.689 Removing: /var/run/dpdk/spdk_pid90979 00:36:17.689 Removing: /var/run/dpdk/spdk_pid91004 00:36:17.689 Clean 00:36:17.689 17:32:47 -- common/autotest_common.sh@1453 -- # return 0 00:36:17.689 17:32:47 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:17.689 17:32:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:17.689 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:36:17.689 17:32:47 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:17.689 17:32:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:17.689 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:36:17.689 17:32:47 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:17.689 17:32:47 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:17.689 17:32:47 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:17.689 17:32:47 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:17.689 17:32:47 -- spdk/autotest.sh@398 -- # hostname 00:36:17.689 17:32:47 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:17.948 geninfo: WARNING: invalid characters removed from testname! 
00:36:44.496 17:33:12 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:45.916 17:33:15 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:48.450 17:33:18 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:50.983 17:33:20 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:52.886 17:33:22 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:55.419 17:33:25 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:57.409 17:33:27 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:57.409 17:33:27 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:57.409 17:33:27 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:36:57.409 17:33:27 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:57.409 17:33:27 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:57.409 17:33:27 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:57.409 + [[ -n 5214 ]] 00:36:57.409 + sudo kill 5214 00:36:57.419 [Pipeline] } 00:36:57.435 [Pipeline] // timeout 00:36:57.441 [Pipeline] } 00:36:57.455 [Pipeline] // stage 00:36:57.461 [Pipeline] } 00:36:57.475 [Pipeline] // catchError 00:36:57.485 [Pipeline] stage 00:36:57.487 [Pipeline] { (Stop VM) 00:36:57.499 [Pipeline] sh 00:36:57.780 + vagrant halt 00:37:01.070 ==> default: Halting domain... 00:37:07.676 [Pipeline] sh 00:37:07.960 + vagrant destroy -f 00:37:11.248 ==> default: Removing domain... 
00:37:11.261 [Pipeline] sh 00:37:11.545 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:37:11.554 [Pipeline] } 00:37:11.569 [Pipeline] // stage 00:37:11.575 [Pipeline] } 00:37:11.589 [Pipeline] // dir 00:37:11.595 [Pipeline] } 00:37:11.609 [Pipeline] // wrap 00:37:11.616 [Pipeline] } 00:37:11.629 [Pipeline] // catchError 00:37:11.638 [Pipeline] stage 00:37:11.641 [Pipeline] { (Epilogue) 00:37:11.654 [Pipeline] sh 00:37:11.941 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:17.243 [Pipeline] catchError 00:37:17.245 [Pipeline] { 00:37:17.260 [Pipeline] sh 00:37:17.592 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:18.528 Artifacts sizes are good 00:37:18.537 [Pipeline] } 00:37:18.552 [Pipeline] // catchError 00:37:18.565 [Pipeline] archiveArtifacts 00:37:18.572 Archiving artifacts 00:37:18.684 [Pipeline] cleanWs 00:37:18.696 [WS-CLEANUP] Deleting project workspace... 00:37:18.696 [WS-CLEANUP] Deferred wipeout is used... 00:37:18.704 [WS-CLEANUP] done 00:37:18.706 [Pipeline] } 00:37:18.721 [Pipeline] // stage 00:37:18.726 [Pipeline] } 00:37:18.739 [Pipeline] // node 00:37:18.744 [Pipeline] End of Pipeline 00:37:18.783 Finished: SUCCESS